2025-08-29 20:11:06.522909 | Job console starting
2025-08-29 20:11:06.546059 | Updating git repos
2025-08-29 20:11:06.606039 | Cloning repos into workspace
2025-08-29 20:11:06.804681 | Restoring repo states
2025-08-29 20:11:06.837393 | Merging changes
2025-08-29 20:11:06.837453 | Checking out repos
2025-08-29 20:11:07.057569 | Preparing playbooks
2025-08-29 20:11:07.790965 | Running Ansible setup
2025-08-29 20:11:12.291757 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 20:11:12.976230 |
2025-08-29 20:11:12.976348 | PLAY [Base pre]
2025-08-29 20:11:12.992908 |
2025-08-29 20:11:12.993032 | TASK [Setup log path fact]
2025-08-29 20:11:13.011810 | orchestrator | ok
2025-08-29 20:11:13.029034 |
2025-08-29 20:11:13.029171 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 20:11:13.068276 | orchestrator | ok
2025-08-29 20:11:13.079330 |
2025-08-29 20:11:13.079442 | TASK [emit-job-header : Print job information]
2025-08-29 20:11:13.132089 | # Job Information
2025-08-29 20:11:13.132313 | Ansible Version: 2.16.14
2025-08-29 20:11:13.132365 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-08-29 20:11:13.132430 | Pipeline: post
2025-08-29 20:11:13.132466 | Executor: 521e9411259a
2025-08-29 20:11:13.132498 | Triggered by: https://github.com/osism/testbed/commit/d9ad4f6b8bc12a1c7da3cda66994d49253cfac73
2025-08-29 20:11:13.132531 | Event ID: 47bad132-8514-11f0-9782-5ac39ffbd6c0
2025-08-29 20:11:13.141762 |
2025-08-29 20:11:13.141877 | LOOP [emit-job-header : Print node information]
2025-08-29 20:11:13.251840 | orchestrator | ok:
2025-08-29 20:11:13.252119 | orchestrator | # Node Information
2025-08-29 20:11:13.252170 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 20:11:13.252196 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 20:11:13.252219 | orchestrator | Username: zuul-testbed03
2025-08-29 20:11:13.252241 | orchestrator | Distro: Debian 12.11
2025-08-29 20:11:13.252266 | orchestrator | Provider: static-testbed
2025-08-29 20:11:13.252288 | orchestrator | Region:
2025-08-29 20:11:13.252310 | orchestrator | Label: testbed-orchestrator
2025-08-29 20:11:13.252330 | orchestrator | Product Name: OpenStack Nova
2025-08-29 20:11:13.252350 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 20:11:13.279572 |
2025-08-29 20:11:13.279704 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 20:11:13.698475 | orchestrator -> localhost | changed
2025-08-29 20:11:13.712647 |
2025-08-29 20:11:13.712784 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 20:11:14.608338 | orchestrator -> localhost | changed
2025-08-29 20:11:14.622062 |
2025-08-29 20:11:14.622163 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 20:11:14.878207 | orchestrator -> localhost | ok
2025-08-29 20:11:14.886886 |
2025-08-29 20:11:14.886999 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 20:11:14.918069 | orchestrator | ok
2025-08-29 20:11:14.937752 | orchestrator | included: /var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 20:11:14.945189 |
2025-08-29 20:11:14.945280 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 20:11:16.635875 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 20:11:16.636459 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/fe7640c6ad7b40cc86499111616a1a68_id_rsa
2025-08-29 20:11:16.636581 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/fe7640c6ad7b40cc86499111616a1a68_id_rsa.pub
2025-08-29 20:11:16.636657 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 20:11:16.636732 | orchestrator -> localhost | SHA256:DBMn/meatnC9Rq9WCZ6TpU6rXS4DJIYKkvF39NU523o zuul-build-sshkey
2025-08-29 20:11:16.636795 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 20:11:16.636882 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 20:11:16.636946 | orchestrator -> localhost | | o . . . |
2025-08-29 20:11:16.637008 | orchestrator -> localhost | |. ..+ . + |
2025-08-29 20:11:16.637066 | orchestrator -> localhost | | + o+. . + |
2025-08-29 20:11:16.637122 | orchestrator -> localhost | |+ . o +=o . o . |
2025-08-29 20:11:16.637178 | orchestrator -> localhost | |.. o o oS.o* o |
2025-08-29 20:11:16.637241 | orchestrator -> localhost | | . .*O + E |
2025-08-29 20:11:16.637299 | orchestrator -> localhost | | . ==.=.. |
2025-08-29 20:11:16.637355 | orchestrator -> localhost | | + oB+. |
2025-08-29 20:11:16.637452 | orchestrator -> localhost | | o+++. |
2025-08-29 20:11:16.637542 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 20:11:16.637691 | orchestrator -> localhost | ok: Runtime: 0:00:01.223492
2025-08-29 20:11:16.653319 |
2025-08-29 20:11:16.653527 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 20:11:16.701348 | orchestrator | ok
2025-08-29 20:11:16.715240 | orchestrator | included: /var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 20:11:16.725055 |
2025-08-29 20:11:16.725158 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 20:11:16.750473 | orchestrator | skipping: Conditional result was False
2025-08-29 20:11:16.758633 |
2025-08-29 20:11:16.758738 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 20:11:18.645309 | orchestrator | changed
2025-08-29 20:11:18.651742 |
2025-08-29 20:11:18.651847 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 20:11:18.932309 | orchestrator | ok
2025-08-29 20:11:18.940889 |
2025-08-29 20:11:18.941014 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 20:11:19.348106 | orchestrator | ok
2025-08-29 20:11:19.355684 |
2025-08-29 20:11:19.355870 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 20:11:19.756675 | orchestrator | ok
2025-08-29 20:11:19.765439 |
2025-08-29 20:11:19.765592 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 20:11:19.800154 | orchestrator | skipping: Conditional result was False
2025-08-29 20:11:19.807385 |
2025-08-29 20:11:19.807524 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 20:11:20.236196 | orchestrator -> localhost | changed
2025-08-29 20:11:20.258620 |
2025-08-29 20:11:20.258888 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 20:11:20.618268 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/fe7640c6ad7b40cc86499111616a1a68_id_rsa (zuul-build-sshkey)
2025-08-29 20:11:20.618646 | orchestrator -> localhost | ok: Runtime: 0:00:00.021121
2025-08-29 20:11:20.630171 |
2025-08-29 20:11:20.630323 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 20:11:21.064087 | orchestrator | ok
2025-08-29 20:11:21.074497 |
2025-08-29 20:11:21.074624 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 20:11:21.099187 | orchestrator | skipping: Conditional result was False
2025-08-29 20:11:21.168966 |
2025-08-29 20:11:21.169101 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 20:11:21.564984 | orchestrator | ok
2025-08-29 20:11:21.576303 |
2025-08-29 20:11:21.576459 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 20:11:21.629329 | orchestrator | ok
2025-08-29 20:11:21.637519 |
2025-08-29 20:11:21.637656 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 20:11:21.939327 | orchestrator -> localhost | ok
2025-08-29 20:11:21.956816 |
2025-08-29 20:11:21.956937 | TASK [validate-host : Collect information about the host]
2025-08-29 20:11:23.481776 | orchestrator | ok
2025-08-29 20:11:23.495926 |
2025-08-29 20:11:23.496061 | TASK [validate-host : Sanitize hostname]
2025-08-29 20:11:23.559535 | orchestrator | ok
2025-08-29 20:11:23.566138 |
2025-08-29 20:11:23.566252 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 20:11:24.157286 | orchestrator -> localhost | changed
2025-08-29 20:11:24.164046 |
2025-08-29 20:11:24.164158 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 20:11:24.597153 | orchestrator | ok
2025-08-29 20:11:24.606028 |
2025-08-29 20:11:24.606175 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 20:11:25.161146 | orchestrator -> localhost | changed
2025-08-29 20:11:25.180019 |
2025-08-29 20:11:25.180157 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 20:11:25.499563 | orchestrator | ok
2025-08-29 20:11:25.510017 |
2025-08-29 20:11:25.510237 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 20:12:42.155369 | orchestrator | changed:
2025-08-29 20:12:42.155648 | orchestrator | .d..t...... src/
2025-08-29 20:12:42.155693 | orchestrator | .d..t...... src/github.com/
2025-08-29 20:12:42.155724 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 20:12:42.155752 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 20:12:42.155777 | orchestrator | RedHat.yml
2025-08-29 20:12:42.170104 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 20:12:42.170121 | orchestrator | RedHat.yml
2025-08-29 20:12:42.170174 | orchestrator | = 1.53.0"...
2025-08-29 20:12:54.796979 | orchestrator | 20:12:54.796 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-08-29 20:12:54.828904 | orchestrator | 20:12:54.828 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 20:12:55.039201 | orchestrator | 20:12:55.038 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 20:12:55.702582 | orchestrator | 20:12:55.702 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 20:12:55.783730 | orchestrator | 20:12:55.783 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 20:12:56.258772 | orchestrator | 20:12:56.258 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 20:12:56.339895 | orchestrator | 20:12:56.339 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 20:12:56.819532 | orchestrator | 20:12:56.819 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 20:12:56.819603 | orchestrator | 20:12:56.819 STDOUT terraform: Providers are signed by their developers.
2025-08-29 20:12:56.819610 | orchestrator | 20:12:56.819 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 20:12:56.819617 | orchestrator | 20:12:56.819 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 20:12:56.819673 | orchestrator | 20:12:56.819 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 20:12:56.819731 | orchestrator | 20:12:56.819 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 20:12:56.819803 | orchestrator | 20:12:56.819 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 20:12:56.819813 | orchestrator | 20:12:56.819 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 20:12:56.819869 | orchestrator | 20:12:56.819 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 20:12:56.819930 | orchestrator | 20:12:56.819 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 20:12:56.819966 | orchestrator | 20:12:56.819 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 20:12:56.819975 | orchestrator | 20:12:56.819 STDOUT terraform: should now work.
2025-08-29 20:12:56.820074 | orchestrator | 20:12:56.819 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 20:12:56.820129 | orchestrator | 20:12:56.820 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 20:12:56.820173 | orchestrator | 20:12:56.820 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 20:12:56.909712 | orchestrator | 20:12:56.909 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 20:12:56.909787 | orchestrator | 20:12:56.909 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 20:12:57.151713 | orchestrator | 20:12:57.151 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 20:12:57.151798 | orchestrator | 20:12:57.151 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 20:12:57.151822 | orchestrator | 20:12:57.151 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 20:12:57.151846 | orchestrator | 20:12:57.151 STDOUT terraform: for this configuration.
2025-08-29 20:12:57.286855 | orchestrator | 20:12:57.286 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 20:12:57.286946 | orchestrator | 20:12:57.286 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-08-29 20:12:57.384647 | orchestrator | 20:12:57.384 STDOUT terraform: ci.auto.tfvars 2025-08-29 20:12:57.739166 | orchestrator | 20:12:57.737 STDOUT terraform: default_custom.tf 2025-08-29 20:12:58.336848 | orchestrator | 20:12:58.336 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead. 2025-08-29 20:13:02.274119 | orchestrator | 20:13:02.271 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-08-29 20:13:03.635381 | orchestrator | 20:13:03.635 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 2s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-08-29 20:13:04.024045 | orchestrator | 20:13:04.022 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-08-29 20:13:04.024084 | orchestrator | 20:13:04.022 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-08-29 20:13:04.024090 | orchestrator | 20:13:04.022 STDOUT terraform:  + create 2025-08-29 20:13:04.024095 | orchestrator | 20:13:04.022 STDOUT terraform:  <= read (data resources) 2025-08-29 20:13:04.024102 | orchestrator | 20:13:04.022 STDOUT terraform: OpenTofu will perform the following actions: 2025-08-29 20:13:04.024108 | orchestrator | 20:13:04.022 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-08-29 20:13:04.024115 | orchestrator | 20:13:04.022 STDOUT terraform:  # (config refers to values not yet known) 2025-08-29 20:13:04.024121 | orchestrator | 20:13:04.022 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-08-29 20:13:04.024127 | orchestrator | 20:13:04.022 STDOUT terraform:  + checksum = (known after apply) 2025-08-29 20:13:04.024133 | orchestrator | 20:13:04.022 STDOUT terraform:  + created_at = (known after apply) 2025-08-29 20:13:04.024140 | orchestrator | 20:13:04.022 STDOUT terraform:  + file = (known after apply) 2025-08-29 20:13:04.024146 | orchestrator | 20:13:04.022 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.024152 | orchestrator | 20:13:04.022 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.024177 | orchestrator | 20:13:04.022 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-08-29 20:13:04.024184 | orchestrator | 20:13:04.022 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-08-29 20:13:04.024190 | orchestrator | 20:13:04.022 STDOUT terraform:  + most_recent = true 2025-08-29 20:13:04.024196 | orchestrator | 20:13:04.022 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.024200 | orchestrator | 20:13:04.022 STDOUT terraform:  + protected = (known after apply) 2025-08-29 20:13:04.024204 | orchestrator | 20:13:04.022 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.024208 | orchestrator | 20:13:04.022 STDOUT terraform:  + schema = (known after apply) 2025-08-29 20:13:04.024212 | orchestrator | 20:13:04.022 STDOUT terraform:  + size_bytes = (known after apply) 2025-08-29 20:13:04.024216 | orchestrator | 20:13:04.022 STDOUT terraform:  + tags = (known after apply) 2025-08-29 20:13:04.024220 | orchestrator | 20:13:04.022 STDOUT terraform:  + updated_at = (known after apply) 2025-08-29 20:13:04.024224 | orchestrator | 
20:13:04.022 STDOUT terraform:  } 2025-08-29 20:13:04.024231 | orchestrator | 20:13:04.022 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-08-29 20:13:04.024235 | orchestrator | 20:13:04.022 STDOUT terraform:  # (config refers to values not yet known) 2025-08-29 20:13:04.024239 | orchestrator | 20:13:04.022 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-08-29 20:13:04.024243 | orchestrator | 20:13:04.022 STDOUT terraform:  + checksum = (known after apply) 2025-08-29 20:13:04.024247 | orchestrator | 20:13:04.022 STDOUT terraform:  + created_at = (known after apply) 2025-08-29 20:13:04.024255 | orchestrator | 20:13:04.022 STDOUT terraform:  + file = (known after apply) 2025-08-29 20:13:04.024259 | orchestrator | 20:13:04.023 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.024263 | orchestrator | 20:13:04.023 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.024266 | orchestrator | 20:13:04.023 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-08-29 20:13:04.024270 | orchestrator | 20:13:04.023 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-08-29 20:13:04.024274 | orchestrator | 20:13:04.023 STDOUT terraform:  + most_recent = true 2025-08-29 20:13:04.024278 | orchestrator | 20:13:04.023 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.024282 | orchestrator | 20:13:04.023 STDOUT terraform:  + protected = (known after apply) 2025-08-29 20:13:04.024285 | orchestrator | 20:13:04.023 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.024300 | orchestrator | 20:13:04.023 STDOUT terraform:  + schema = (known after apply) 2025-08-29 20:13:04.024304 | orchestrator | 20:13:04.023 STDOUT terraform:  + size_bytes = (known after apply) 2025-08-29 20:13:04.024308 | orchestrator | 20:13:04.023 STDOUT terraform:  + tags = (known after apply) 2025-08-29 20:13:04.024312 | orchestrator | 20:13:04.023 STDOUT terraform:  + updated_at = (known after apply) 2025-08-29 20:13:04.024315 | orchestrator | 20:13:04.023 STDOUT terraform:  } 2025-08-29 20:13:04.024319 | orchestrator | 20:13:04.023 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-08-29 20:13:04.024328 | orchestrator | 20:13:04.023 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-08-29 20:13:04.024334 | orchestrator | 20:13:04.023 STDOUT terraform:  + content = (known after apply) 2025-08-29 20:13:04.024340 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-08-29 20:13:04.024343 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-08-29 20:13:04.024347 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_md5 = (known after apply) 2025-08-29 20:13:04.024351 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha1 = (known after apply) 2025-08-29 20:13:04.024355 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha256 = (known after apply) 2025-08-29 20:13:04.024359 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha512 = (known after apply) 2025-08-29 20:13:04.024363 | orchestrator | 20:13:04.023 STDOUT terraform:  + directory_permission = "0777" 2025-08-29 20:13:04.024367 | orchestrator | 20:13:04.023 STDOUT terraform:  + file_permission = "0644" 2025-08-29 20:13:04.024370 | orchestrator | 20:13:04.023 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-08-29 20:13:04.024374 | orchestrator | 20:13:04.023 STDOUT 
terraform:  + id = (known after apply) 2025-08-29 20:13:04.024378 | orchestrator | 20:13:04.023 STDOUT terraform:  } 2025-08-29 20:13:04.024382 | orchestrator | 20:13:04.023 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-08-29 20:13:04.024386 | orchestrator | 20:13:04.023 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-08-29 20:13:04.024389 | orchestrator | 20:13:04.023 STDOUT terraform:  + content = (known after apply) 2025-08-29 20:13:04.024393 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-08-29 20:13:04.024397 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-08-29 20:13:04.024401 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_md5 = (known after apply) 2025-08-29 20:13:04.024404 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha1 = (known after apply) 2025-08-29 20:13:04.024408 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha256 = (known after apply) 2025-08-29 20:13:04.024414 | orchestrator | 20:13:04.023 STDOUT terraform:  + content_sha512 = (known after apply) 2025-08-29 20:13:04.024418 | orchestrator | 20:13:04.024 STDOUT terraform:  + directory_permission = "0777" 2025-08-29 20:13:04.024422 | orchestrator | 20:13:04.024 STDOUT terraform:  + file_permission = "0644" 2025-08-29 20:13:04.024426 | orchestrator | 20:13:04.024 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-08-29 20:13:04.024430 | orchestrator | 20:13:04.024 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.024433 | orchestrator | 20:13:04.024 STDOUT terraform:  } 2025-08-29 20:13:04.024437 | orchestrator | 20:13:04.024 STDOUT terraform:  # local_file.inventory will be created 2025-08-29 20:13:04.024441 | orchestrator | 20:13:04.024 STDOUT terraform:  + resource "local_file" "inventory" { 2025-08-29 20:13:04.024445 | orchestrator | 20:13:04.024 STDOUT terraform:  + content = (known after apply) 2025-08-29 20:13:04.024453 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-08-29 20:13:04.024457 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-08-29 20:13:04.024463 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_md5 = (known after apply) 2025-08-29 20:13:04.024466 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha1 = (known after apply) 2025-08-29 20:13:04.024470 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha256 = (known after apply) 2025-08-29 20:13:04.024474 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha512 = (known after apply) 2025-08-29 20:13:04.024478 | orchestrator | 20:13:04.024 STDOUT terraform:  + directory_permission = "0777" 2025-08-29 20:13:04.024483 | orchestrator | 20:13:04.024 STDOUT terraform:  + file_permission = "0644" 2025-08-29 20:13:04.025871 | orchestrator | 20:13:04.024 STDOUT terraform:  + filename = "inventory.ci" 2025-08-29 20:13:04.025897 | orchestrator | 20:13:04.024 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.025903 | orchestrator | 20:13:04.024 STDOUT terraform:  } 2025-08-29 20:13:04.025907 | orchestrator | 20:13:04.024 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-08-29 20:13:04.025911 | orchestrator | 20:13:04.024 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-08-29 20:13:04.025917 | orchestrator | 20:13:04.024 STDOUT terraform:  + content = (sensitive value) 2025-08-29 
20:13:04.025921 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-08-29 20:13:04.025925 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-08-29 20:13:04.025929 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_md5 = (known after apply) 2025-08-29 20:13:04.025933 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha1 = (known after apply) 2025-08-29 20:13:04.025937 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha256 = (known after apply) 2025-08-29 20:13:04.025941 | orchestrator | 20:13:04.024 STDOUT terraform:  + content_sha512 = (known after apply) 2025-08-29 20:13:04.025944 | orchestrator | 20:13:04.024 STDOUT terraform:  + directory_permission = "0700" 2025-08-29 20:13:04.025949 | orchestrator | 20:13:04.024 STDOUT terraform:  + file_permission = "0600" 2025-08-29 20:13:04.025952 | orchestrator | 20:13:04.024 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-08-29 20:13:04.025963 | orchestrator | 20:13:04.024 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.025967 | orchestrator | 20:13:04.024 STDOUT terraform:  } 2025-08-29 20:13:04.025971 | orchestrator | 20:13:04.024 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-08-29 20:13:04.025974 | orchestrator | 20:13:04.024 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-08-29 20:13:04.025978 | orchestrator | 20:13:04.024 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.025982 | orchestrator | 20:13:04.024 STDOUT terraform:  } 2025-08-29 20:13:04.025986 | orchestrator | 20:13:04.024 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-08-29 20:13:04.025999 | orchestrator | 20:13:04.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-08-29 20:13:04.026003 | orchestrator | 20:13:04.025 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.026007 | orchestrator | 20:13:04.025 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.026011 | orchestrator | 20:13:04.025 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.026028 | orchestrator | 20:13:04.025 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.026032 | orchestrator | 20:13:04.025 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.026051 | orchestrator | 20:13:04.025 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-08-29 20:13:04.026055 | orchestrator | 20:13:04.025 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.026059 | orchestrator | 20:13:04.025 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.026062 | orchestrator | 20:13:04.025 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.026066 | orchestrator | 20:13:04.025 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.026070 | orchestrator | 20:13:04.025 STDOUT terraform:  } 2025-08-29 20:13:04.026074 | orchestrator | 20:13:04.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-08-29 20:13:04.026078 | orchestrator | 20:13:04.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.026082 | orchestrator | 20:13:04.025 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.026093 | orchestrator | 20:13:04.025 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 
20:13:04.026097 | orchestrator | 20:13:04.025 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.026101 | orchestrator | 20:13:04.025 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.026104 | orchestrator | 20:13:04.025 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.026108 | orchestrator | 20:13:04.025 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-08-29 20:13:04.026112 | orchestrator | 20:13:04.025 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.026116 | orchestrator | 20:13:04.025 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.026120 | orchestrator | 20:13:04.025 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.026123 | orchestrator | 20:13:04.025 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.026127 | orchestrator | 20:13:04.025 STDOUT terraform:  } 2025-08-29 20:13:04.026131 | orchestrator | 20:13:04.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-08-29 20:13:04.026135 | orchestrator | 20:13:04.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.026139 | orchestrator | 20:13:04.025 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.026146 | orchestrator | 20:13:04.025 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.026150 | orchestrator | 20:13:04.025 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.026154 | orchestrator | 20:13:04.025 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.026158 | orchestrator | 20:13:04.025 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.026166 | orchestrator | 20:13:04.026 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-08-29 20:13:04.028823 | orchestrator | 20:13:04.026 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.028846 | orchestrator | 20:13:04.026 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.028851 | orchestrator | 20:13:04.026 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.028855 | orchestrator | 20:13:04.026 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.028860 | orchestrator | 20:13:04.026 STDOUT terraform:  } 2025-08-29 20:13:04.028864 | orchestrator | 20:13:04.026 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-08-29 20:13:04.028868 | orchestrator | 20:13:04.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.028872 | orchestrator | 20:13:04.026 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.028876 | orchestrator | 20:13:04.026 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.028880 | orchestrator | 20:13:04.026 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.028884 | orchestrator | 20:13:04.026 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.028887 | orchestrator | 20:13:04.026 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.028891 | orchestrator | 20:13:04.026 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-08-29 20:13:04.028895 | orchestrator | 20:13:04.026 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.028899 | orchestrator | 20:13:04.026 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.028902 | orchestrator | 20:13:04.027 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-08-29 20:13:04.028906 | orchestrator | 20:13:04.027 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.028910 | orchestrator | 20:13:04.027 STDOUT terraform:  } 2025-08-29 20:13:04.028914 | orchestrator | 20:13:04.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-08-29 20:13:04.028918 | orchestrator | 20:13:04.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.028921 | orchestrator | 20:13:04.027 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.028925 | orchestrator | 20:13:04.027 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.028930 | orchestrator | 20:13:04.027 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.028933 | orchestrator | 20:13:04.027 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.028937 | orchestrator | 20:13:04.027 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.028949 | orchestrator | 20:13:04.027 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-08-29 20:13:04.028953 | orchestrator | 20:13:04.027 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.028957 | orchestrator | 20:13:04.027 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.028960 | orchestrator | 20:13:04.027 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.028964 | orchestrator | 20:13:04.027 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.028968 | orchestrator | 20:13:04.027 STDOUT terraform:  } 2025-08-29 20:13:04.028972 | orchestrator | 20:13:04.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-08-29 20:13:04.028976 | orchestrator | 20:13:04.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.028979 | orchestrator | 20:13:04.027 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.028983 | orchestrator | 20:13:04.027 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.028987 | orchestrator | 20:13:04.027 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.028991 | orchestrator | 20:13:04.027 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.029000 | orchestrator | 20:13:04.027 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.029004 | orchestrator | 20:13:04.027 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-08-29 20:13:04.029008 | orchestrator | 20:13:04.028 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.029012 | orchestrator | 20:13:04.028 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.029015 | orchestrator | 20:13:04.028 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.029019 | orchestrator | 20:13:04.028 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.029023 | orchestrator | 20:13:04.028 STDOUT terraform:  } 2025-08-29 20:13:04.029027 | orchestrator | 20:13:04.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-08-29 20:13:04.029031 | orchestrator | 20:13:04.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-08-29 20:13:04.029070 | orchestrator | 20:13:04.028 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.029074 | orchestrator | 20:13:04.028 STDOUT terraform:  + availability_zone = "nova" 
2025-08-29 20:13:04.029083 | orchestrator | 20:13:04.028 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.029087 | orchestrator | 20:13:04.028 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.029091 | orchestrator | 20:13:04.028 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.029095 | orchestrator | 20:13:04.028 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-08-29 20:13:04.029098 | orchestrator | 20:13:04.028 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.029102 | orchestrator | 20:13:04.028 STDOUT terraform:  + size = 80 2025-08-29 20:13:04.029110 | orchestrator | 20:13:04.028 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.029113 | orchestrator | 20:13:04.028 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.029117 | orchestrator | 20:13:04.028 STDOUT terraform:  } 2025-08-29 20:13:04.029121 | orchestrator | 20:13:04.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-08-29 20:13:04.029125 | orchestrator | 20:13:04.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.029131 | orchestrator | 20:13:04.028 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.029135 | orchestrator | 20:13:04.029 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.029763 | orchestrator | 20:13:04.029 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.029770 | orchestrator | 20:13:04.029 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.029774 | orchestrator | 20:13:04.029 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 20:13:04.029778 | orchestrator | 20:13:04.029 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.029782 | orchestrator | 20:13:04.029 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.029786 | orchestrator | 20:13:04.029 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.029789 | orchestrator | 20:13:04.029 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.029793 | orchestrator | 20:13:04.029 STDOUT terraform:  } 2025-08-29 20:13:04.029797 | orchestrator | 20:13:04.029 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 20:13:04.029801 | orchestrator | 20:13:04.029 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.029815 | orchestrator | 20:13:04.029 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.029819 | orchestrator | 20:13:04.029 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.029822 | orchestrator | 20:13:04.029 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.029970 | orchestrator | 20:13:04.029 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.029977 | orchestrator | 20:13:04.029 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 20:13:04.031320 | orchestrator | 20:13:04.029 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.031410 | orchestrator | 20:13:04.031 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.031455 | orchestrator | 20:13:04.031 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.031470 | orchestrator | 20:13:04.031 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.031482 | orchestrator | 20:13:04.031 STDOUT terraform:  } 2025-08-29 20:13:04.031494 | orchestrator 
| 20:13:04.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 20:13:04.031506 | orchestrator | 20:13:04.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.031521 | orchestrator | 20:13:04.031 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.031551 | orchestrator | 20:13:04.031 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.031563 | orchestrator | 20:13:04.031 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.031578 | orchestrator | 20:13:04.031 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.031592 | orchestrator | 20:13:04.031 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 20:13:04.031639 | orchestrator | 20:13:04.031 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.031653 | orchestrator | 20:13:04.031 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.031668 | orchestrator | 20:13:04.031 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.031736 | orchestrator | 20:13:04.031 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.031750 | orchestrator | 20:13:04.031 STDOUT terraform:  } 2025-08-29 20:13:04.031765 | orchestrator | 20:13:04.031 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 20:13:04.031780 | orchestrator | 20:13:04.031 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.031815 | orchestrator | 20:13:04.031 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.031831 | orchestrator | 20:13:04.031 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.031963 | orchestrator | 20:13:04.031 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.031978 | orchestrator | 20:13:04.031 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.031989 | orchestrator | 20:13:04.031 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 20:13:04.032004 | orchestrator | 20:13:04.031 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.032072 | orchestrator | 20:13:04.031 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.032090 | orchestrator | 20:13:04.031 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.032102 | orchestrator | 20:13:04.032 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.032113 | orchestrator | 20:13:04.032 STDOUT terraform:  } 2025-08-29 20:13:04.032127 | orchestrator | 20:13:04.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 20:13:04.032188 | orchestrator | 20:13:04.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.032349 | orchestrator | 20:13:04.032 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.032365 | orchestrator | 20:13:04.032 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.032380 | orchestrator | 20:13:04.032 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.032415 | orchestrator | 20:13:04.032 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.032440 | orchestrator | 20:13:04.032 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 20:13:04.032482 | orchestrator | 20:13:04.032 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.032504 | orchestrator | 20:13:04.032 STDOUT 
terraform:  + size = 20 2025-08-29 20:13:04.032520 | orchestrator | 20:13:04.032 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.032553 | orchestrator | 20:13:04.032 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.032565 | orchestrator | 20:13:04.032 STDOUT terraform:  } 2025-08-29 20:13:04.032580 | orchestrator | 20:13:04.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 20:13:04.032643 | orchestrator | 20:13:04.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.032663 | orchestrator | 20:13:04.032 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.032699 | orchestrator | 20:13:04.032 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.032714 | orchestrator | 20:13:04.032 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.032756 | orchestrator | 20:13:04.032 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.032787 | orchestrator | 20:13:04.032 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 20:13:04.032848 | orchestrator | 20:13:04.032 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.032863 | orchestrator | 20:13:04.032 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.032878 | orchestrator | 20:13:04.032 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.032890 | orchestrator | 20:13:04.032 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.032923 | orchestrator | 20:13:04.032 STDOUT terraform:  } 2025-08-29 20:13:04.032939 | orchestrator | 20:13:04.032 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 20:13:04.032979 | orchestrator | 20:13:04.032 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.033023 | orchestrator | 20:13:04.032 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.033100 | orchestrator | 20:13:04.033 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.033121 | orchestrator | 20:13:04.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.033145 | orchestrator | 20:13:04.033 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.033160 | orchestrator | 20:13:04.033 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 20:13:04.033176 | orchestrator | 20:13:04.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.033239 | orchestrator | 20:13:04.033 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.033254 | orchestrator | 20:13:04.033 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.033269 | orchestrator | 20:13:04.033 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.033281 | orchestrator | 20:13:04.033 STDOUT terraform:  } 2025-08-29 20:13:04.033441 | orchestrator | 20:13:04.033 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 20:13:04.033457 | orchestrator | 20:13:04.033 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.033483 | orchestrator | 20:13:04.033 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.033513 | orchestrator | 20:13:04.033 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.033529 | orchestrator | 20:13:04.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.033572 | orchestrator | 
20:13:04.033 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.033588 | orchestrator | 20:13:04.033 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 20:13:04.033664 | orchestrator | 20:13:04.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.033686 | orchestrator | 20:13:04.033 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.033701 | orchestrator | 20:13:04.033 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.033713 | orchestrator | 20:13:04.033 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.033743 | orchestrator | 20:13:04.033 STDOUT terraform:  } 2025-08-29 20:13:04.033758 | orchestrator | 20:13:04.033 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 20:13:04.033799 | orchestrator | 20:13:04.033 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 20:13:04.033815 | orchestrator | 20:13:04.033 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 20:13:04.033856 | orchestrator | 20:13:04.033 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.033890 | orchestrator | 20:13:04.033 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.033906 | orchestrator | 20:13:04.033 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 20:13:04.033956 | orchestrator | 20:13:04.033 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 20:13:04.034008 | orchestrator | 20:13:04.033 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.034075 | orchestrator | 20:13:04.033 STDOUT terraform:  + size = 20 2025-08-29 20:13:04.034093 | orchestrator | 20:13:04.033 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 20:13:04.034104 | orchestrator | 20:13:04.034 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 20:13:04.034118 | orchestrator | 20:13:04.034 STDOUT terraform:  } 2025-08-29 20:13:04.034170 | orchestrator | 20:13:04.034 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 20:13:04.034200 | orchestrator | 20:13:04.034 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 20:13:04.034238 | orchestrator | 20:13:04.034 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.034265 | orchestrator | 20:13:04.034 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.034307 | orchestrator | 20:13:04.034 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.034391 | orchestrator | 20:13:04.034 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.034405 | orchestrator | 20:13:04.034 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.034426 | orchestrator | 20:13:04.034 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.034441 | orchestrator | 20:13:04.034 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.034452 | orchestrator | 20:13:04.034 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.034466 | orchestrator | 20:13:04.034 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 20:13:04.034480 | orchestrator | 20:13:04.034 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.034528 | orchestrator | 20:13:04.034 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.034567 | orchestrator | 20:13:04.034 STDOUT terraform:  + id = (known after apply) 2025-08-29 
20:13:04.034584 | orchestrator | 20:13:04.034 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.034635 | orchestrator | 20:13:04.034 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.034653 | orchestrator | 20:13:04.034 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.034693 | orchestrator | 20:13:04.034 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 20:13:04.034709 | orchestrator | 20:13:04.034 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.034785 | orchestrator | 20:13:04.034 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.034800 | orchestrator | 20:13:04.034 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.034815 | orchestrator | 20:13:04.034 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.034829 | orchestrator | 20:13:04.034 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.034867 | orchestrator | 20:13:04.034 STDOUT terraform:  + user_data = (sensitive value) 2025-08-29 20:13:04.034884 | orchestrator | 20:13:04.034 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.034916 | orchestrator | 20:13:04.034 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.034931 | orchestrator | 20:13:04.034 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.034964 | orchestrator | 20:13:04.034 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.034980 | orchestrator | 20:13:04.034 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.035019 | orchestrator | 20:13:04.034 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.035058 | orchestrator | 20:13:04.035 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.035075 | orchestrator | 20:13:04.035 STDOUT terraform:  } 2025-08-29 20:13:04.035087 | orchestrator | 20:13:04.035 STDOUT terraform:  + network { 2025-08-29 20:13:04.035102 | orchestrator | 20:13:04.035 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.035116 | orchestrator | 20:13:04.035 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.035163 | orchestrator | 20:13:04.035 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.035180 | orchestrator | 20:13:04.035 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.035228 | orchestrator | 20:13:04.035 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.035245 | orchestrator | 20:13:04.035 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.035284 | orchestrator | 20:13:04.035 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.035297 | orchestrator | 20:13:04.035 STDOUT terraform:  } 2025-08-29 20:13:04.035313 | orchestrator | 20:13:04.035 STDOUT terraform:  } 2025-08-29 20:13:04.035337 | orchestrator | 20:13:04.035 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 20:13:04.035388 | orchestrator | 20:13:04.035 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.035415 | orchestrator | 20:13:04.035 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.035490 | orchestrator | 20:13:04.035 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.035510 | orchestrator | 20:13:04.035 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.035525 | orchestrator | 20:13:04.035 STDOUT terraform:  + all_tags = (known after apply) 
2025-08-29 20:13:04.035540 | orchestrator | 20:13:04.035 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.035554 | orchestrator | 20:13:04.035 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.035601 | orchestrator | 20:13:04.035 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.035641 | orchestrator | 20:13:04.035 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.035658 | orchestrator | 20:13:04.035 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.035673 | orchestrator | 20:13:04.035 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.035716 | orchestrator | 20:13:04.035 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.035756 | orchestrator | 20:13:04.035 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.035798 | orchestrator | 20:13:04.035 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.035829 | orchestrator | 20:13:04.035 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.035849 | orchestrator | 20:13:04.035 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.035890 | orchestrator | 20:13:04.035 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 20:13:04.035907 | orchestrator | 20:13:04.035 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.035948 | orchestrator | 20:13:04.035 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.035966 | orchestrator | 20:13:04.035 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.036005 | orchestrator | 20:13:04.035 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.036024 | orchestrator | 20:13:04.035 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.036204 | orchestrator | 20:13:04.036 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.036249 | orchestrator | 20:13:04.036 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.036263 | orchestrator | 20:13:04.036 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.036268 | orchestrator | 20:13:04.036 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.036278 | orchestrator | 20:13:04.036 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.036282 | orchestrator | 20:13:04.036 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.036286 | orchestrator | 20:13:04.036 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.036290 | orchestrator | 20:13:04.036 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.036296 | orchestrator | 20:13:04.036 STDOUT terraform:  } 2025-08-29 20:13:04.036301 | orchestrator | 20:13:04.036 STDOUT terraform:  + network { 2025-08-29 20:13:04.036316 | orchestrator | 20:13:04.036 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.036350 | orchestrator | 20:13:04.036 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.036399 | orchestrator | 20:13:04.036 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.036407 | orchestrator | 20:13:04.036 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.036440 | orchestrator | 20:13:04.036 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.036470 | orchestrator | 20:13:04.036 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.036501 | orchestrator | 20:13:04.036 STDOUT terraform:  + uuid = (known after apply) 
2025-08-29 20:13:04.036507 | orchestrator | 20:13:04.036 STDOUT terraform:  } 2025-08-29 20:13:04.036529 | orchestrator | 20:13:04.036 STDOUT terraform:  } 2025-08-29 20:13:04.036572 | orchestrator | 20:13:04.036 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 20:13:04.036617 | orchestrator | 20:13:04.036 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.036650 | orchestrator | 20:13:04.036 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.036694 | orchestrator | 20:13:04.036 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.036728 | orchestrator | 20:13:04.036 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.036772 | orchestrator | 20:13:04.036 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.036798 | orchestrator | 20:13:04.036 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.036837 | orchestrator | 20:13:04.036 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.036854 | orchestrator | 20:13:04.036 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.036887 | orchestrator | 20:13:04.036 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.036915 | orchestrator | 20:13:04.036 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.036938 | orchestrator | 20:13:04.036 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.036974 | orchestrator | 20:13:04.036 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.037008 | orchestrator | 20:13:04.036 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.037073 | orchestrator | 20:13:04.037 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.037113 | orchestrator | 20:13:04.037 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.037136 | orchestrator | 20:13:04.037 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.037177 | orchestrator | 20:13:04.037 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 20:13:04.037194 | orchestrator | 20:13:04.037 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.037228 | orchestrator | 20:13:04.037 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.037262 | orchestrator | 20:13:04.037 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.037284 | orchestrator | 20:13:04.037 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.037319 | orchestrator | 20:13:04.037 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.037372 | orchestrator | 20:13:04.037 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.037379 | orchestrator | 20:13:04.037 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.037408 | orchestrator | 20:13:04.037 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.037444 | orchestrator | 20:13:04.037 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.037478 | orchestrator | 20:13:04.037 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.037509 | orchestrator | 20:13:04.037 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.037540 | orchestrator | 20:13:04.037 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.037579 | orchestrator | 20:13:04.037 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.037586 | 
orchestrator | 20:13:04.037 STDOUT terraform:  } 2025-08-29 20:13:04.037606 | orchestrator | 20:13:04.037 STDOUT terraform:  + network { 2025-08-29 20:13:04.037612 | orchestrator | 20:13:04.037 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.037648 | orchestrator | 20:13:04.037 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.037678 | orchestrator | 20:13:04.037 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.037712 | orchestrator | 20:13:04.037 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.037747 | orchestrator | 20:13:04.037 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.037778 | orchestrator | 20:13:04.037 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.037809 | orchestrator | 20:13:04.037 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.037815 | orchestrator | 20:13:04.037 STDOUT terraform:  } 2025-08-29 20:13:04.037831 | orchestrator | 20:13:04.037 STDOUT terraform:  } 2025-08-29 20:13:04.037874 | orchestrator | 20:13:04.037 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 20:13:04.037918 | orchestrator | 20:13:04.037 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.037953 | orchestrator | 20:13:04.037 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.037986 | orchestrator | 20:13:04.037 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.038028 | orchestrator | 20:13:04.037 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.038090 | orchestrator | 20:13:04.038 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.038118 | orchestrator | 20:13:04.038 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.038140 | orchestrator | 20:13:04.038 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.038175 | orchestrator | 20:13:04.038 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.038209 | orchestrator | 20:13:04.038 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.038238 | orchestrator | 20:13:04.038 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.038261 | orchestrator | 20:13:04.038 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.038299 | orchestrator | 20:13:04.038 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.038340 | orchestrator | 20:13:04.038 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.038374 | orchestrator | 20:13:04.038 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.038414 | orchestrator | 20:13:04.038 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.038444 | orchestrator | 20:13:04.038 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.038475 | orchestrator | 20:13:04.038 STDOUT terraform:  + name = "testbed-node-2" 2025-08-29 20:13:04.038500 | orchestrator | 20:13:04.038 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.038534 | orchestrator | 20:13:04.038 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.038568 | orchestrator | 20:13:04.038 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.038591 | orchestrator | 20:13:04.038 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.038628 | orchestrator | 20:13:04.038 STDOUT terraform:  + updated = (known 
after apply) 2025-08-29 20:13:04.038677 | orchestrator | 20:13:04.038 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.038700 | orchestrator | 20:13:04.038 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.038724 | orchestrator | 20:13:04.038 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.038756 | orchestrator | 20:13:04.038 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.038784 | orchestrator | 20:13:04.038 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.038812 | orchestrator | 20:13:04.038 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.038844 | orchestrator | 20:13:04.038 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.038882 | orchestrator | 20:13:04.038 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.038896 | orchestrator | 20:13:04.038 STDOUT terraform:  } 2025-08-29 20:13:04.038913 | orchestrator | 20:13:04.038 STDOUT terraform:  + network { 2025-08-29 20:13:04.038937 | orchestrator | 20:13:04.038 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.038968 | orchestrator | 20:13:04.038 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.038998 | orchestrator | 20:13:04.038 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.039093 | orchestrator | 20:13:04.038 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.039106 | orchestrator | 20:13:04.039 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.039111 | orchestrator | 20:13:04.039 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.039242 | orchestrator | 20:13:04.039 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.039294 | orchestrator | 20:13:04.039 STDOUT terraform:  } 2025-08-29 20:13:04.039312 | orchestrator | 20:13:04.039 STDOUT terraform:  } 2025-08-29 20:13:04.039325 | orchestrator | 20:13:04.039 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-08-29 20:13:04.039346 | orchestrator | 20:13:04.039 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.039357 | orchestrator | 20:13:04.039 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.039367 | orchestrator | 20:13:04.039 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.039377 | orchestrator | 20:13:04.039 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.039390 | orchestrator | 20:13:04.039 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.039403 | orchestrator | 20:13:04.039 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.039417 | orchestrator | 20:13:04.039 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.039493 | orchestrator | 20:13:04.039 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.039506 | orchestrator | 20:13:04.039 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.039520 | orchestrator | 20:13:04.039 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.039533 | orchestrator | 20:13:04.039 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.039567 | orchestrator | 20:13:04.039 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.039582 | orchestrator | 20:13:04.039 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.039635 | orchestrator | 20:13:04.039 STDOUT 
terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.039685 | orchestrator | 20:13:04.039 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.039698 | orchestrator | 20:13:04.039 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.039713 | orchestrator | 20:13:04.039 STDOUT terraform:  + name = "testbed-node-3" 2025-08-29 20:13:04.039751 | orchestrator | 20:13:04.039 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.039781 | orchestrator | 20:13:04.039 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.039796 | orchestrator | 20:13:04.039 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.039848 | orchestrator | 20:13:04.039 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.039865 | orchestrator | 20:13:04.039 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.039921 | orchestrator | 20:13:04.039 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.039943 | orchestrator | 20:13:04.039 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.039959 | orchestrator | 20:13:04.039 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.039974 | orchestrator | 20:13:04.039 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.040026 | orchestrator | 20:13:04.039 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.040064 | orchestrator | 20:13:04.039 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.040079 | orchestrator | 20:13:04.040 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.040093 | orchestrator | 20:13:04.040 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.040108 | orchestrator | 20:13:04.040 STDOUT terraform:  } 2025-08-29 20:13:04.040123 | orchestrator | 20:13:04.040 STDOUT terraform:  + network { 2025-08-29 20:13:04.040138 | orchestrator | 20:13:04.040 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.040153 | orchestrator | 20:13:04.040 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.040205 | orchestrator | 20:13:04.040 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.040221 | orchestrator | 20:13:04.040 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.040270 | orchestrator | 20:13:04.040 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.040287 | orchestrator | 20:13:04.040 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.040310 | orchestrator | 20:13:04.040 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.040326 | orchestrator | 20:13:04.040 STDOUT terraform:  } 2025-08-29 20:13:04.040337 | orchestrator | 20:13:04.040 STDOUT terraform:  } 2025-08-29 20:13:04.040386 | orchestrator | 20:13:04.040 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-08-29 20:13:04.040426 | orchestrator | 20:13:04.040 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.040442 | orchestrator | 20:13:04.040 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.040493 | orchestrator | 20:13:04.040 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.040509 | orchestrator | 20:13:04.040 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.040566 | orchestrator | 20:13:04.040 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.040583 | 
orchestrator | 20:13:04.040 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.040606 | orchestrator | 20:13:04.040 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.040620 | orchestrator | 20:13:04.040 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.040671 | orchestrator | 20:13:04.040 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.040687 | orchestrator | 20:13:04.040 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.040702 | orchestrator | 20:13:04.040 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.040739 | orchestrator | 20:13:04.040 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.040780 | orchestrator | 20:13:04.040 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.040805 | orchestrator | 20:13:04.040 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.040856 | orchestrator | 20:13:04.040 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.040873 | orchestrator | 20:13:04.040 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.040888 | orchestrator | 20:13:04.040 STDOUT terraform:  + name = "testbed-node-4" 2025-08-29 20:13:04.040927 | orchestrator | 20:13:04.040 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.040944 | orchestrator | 20:13:04.040 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.040980 | orchestrator | 20:13:04.040 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.040997 | orchestrator | 20:13:04.040 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.041097 | orchestrator | 20:13:04.040 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.041139 | orchestrator | 20:13:04.041 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.041164 | orchestrator | 20:13:04.041 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.041189 | orchestrator | 20:13:04.041 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.041214 | orchestrator | 20:13:04.041 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.041229 | orchestrator | 20:13:04.041 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.041271 | orchestrator | 20:13:04.041 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.041288 | orchestrator | 20:13:04.041 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.041337 | orchestrator | 20:13:04.041 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.041350 | orchestrator | 20:13:04.041 STDOUT terraform:  } 2025-08-29 20:13:04.041365 | orchestrator | 20:13:04.041 STDOUT terraform:  + network { 2025-08-29 20:13:04.041380 | orchestrator | 20:13:04.041 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.041394 | orchestrator | 20:13:04.041 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.041432 | orchestrator | 20:13:04.041 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.041448 | orchestrator | 20:13:04.041 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.041484 | orchestrator | 20:13:04.041 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.041524 | orchestrator | 20:13:04.041 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.041564 | orchestrator | 20:13:04.041 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.041577 | 
orchestrator | 20:13:04.041 STDOUT terraform:  } 2025-08-29 20:13:04.041591 | orchestrator | 20:13:04.041 STDOUT terraform:  } 2025-08-29 20:13:04.041630 | orchestrator | 20:13:04.041 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-08-29 20:13:04.041669 | orchestrator | 20:13:04.041 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 20:13:04.041686 | orchestrator | 20:13:04.041 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 20:13:04.041741 | orchestrator | 20:13:04.041 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 20:13:04.041758 | orchestrator | 20:13:04.041 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 20:13:04.041817 | orchestrator | 20:13:04.041 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.041835 | orchestrator | 20:13:04.041 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 20:13:04.041847 | orchestrator | 20:13:04.041 STDOUT terraform:  + config_drive = true 2025-08-29 20:13:04.041896 | orchestrator | 20:13:04.041 STDOUT terraform:  + created = (known after apply) 2025-08-29 20:13:04.041913 | orchestrator | 20:13:04.041 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 20:13:04.041934 | orchestrator | 20:13:04.041 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 20:13:04.041954 | orchestrator | 20:13:04.041 STDOUT terraform:  + force_delete = false 2025-08-29 20:13:04.041995 | orchestrator | 20:13:04.041 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 20:13:04.042086 | orchestrator | 20:13:04.041 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049097 | orchestrator | 20:13:04.042 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 20:13:04.049175 | orchestrator | 20:13:04.042 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 20:13:04.049185 | orchestrator | 20:13:04.042 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 20:13:04.049194 | orchestrator | 20:13:04.042 STDOUT terraform:  + name = "testbed-node-5" 2025-08-29 20:13:04.049202 | orchestrator | 20:13:04.042 STDOUT terraform:  + power_state = "active" 2025-08-29 20:13:04.049210 | orchestrator | 20:13:04.042 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049218 | orchestrator | 20:13:04.042 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 20:13:04.049226 | orchestrator | 20:13:04.042 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 20:13:04.049234 | orchestrator | 20:13:04.042 STDOUT terraform:  + updated = (known after apply) 2025-08-29 20:13:04.049242 | orchestrator | 20:13:04.042 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 20:13:04.049250 | orchestrator | 20:13:04.042 STDOUT terraform:  + block_device { 2025-08-29 20:13:04.049276 | orchestrator | 20:13:04.042 STDOUT terraform:  + boot_index = 0 2025-08-29 20:13:04.049284 | orchestrator | 20:13:04.042 STDOUT terraform:  + delete_on_termination = false 2025-08-29 20:13:04.049292 | orchestrator | 20:13:04.042 STDOUT terraform:  + destination_type = "volume" 2025-08-29 20:13:04.049300 | orchestrator | 20:13:04.042 STDOUT terraform:  + multiattach = false 2025-08-29 20:13:04.049308 | orchestrator | 20:13:04.042 STDOUT terraform:  + source_type = "volume" 2025-08-29 20:13:04.049315 | orchestrator | 20:13:04.042 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.049324 | orchestrator | 20:13:04.042 
STDOUT terraform:  } 2025-08-29 20:13:04.049332 | orchestrator | 20:13:04.042 STDOUT terraform:  + network { 2025-08-29 20:13:04.049340 | orchestrator | 20:13:04.042 STDOUT terraform:  + access_network = false 2025-08-29 20:13:04.049347 | orchestrator | 20:13:04.042 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 20:13:04.049355 | orchestrator | 20:13:04.042 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 20:13:04.049363 | orchestrator | 20:13:04.042 STDOUT terraform:  + mac = (known after apply) 2025-08-29 20:13:04.049371 | orchestrator | 20:13:04.043 STDOUT terraform:  + name = (known after apply) 2025-08-29 20:13:04.049379 | orchestrator | 20:13:04.043 STDOUT terraform:  + port = (known after apply) 2025-08-29 20:13:04.049387 | orchestrator | 20:13:04.043 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 20:13:04.049395 | orchestrator | 20:13:04.043 STDOUT terraform:  } 2025-08-29 20:13:04.049403 | orchestrator | 20:13:04.043 STDOUT terraform:  } 2025-08-29 20:13:04.049411 | orchestrator | 20:13:04.043 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-08-29 20:13:04.049419 | orchestrator | 20:13:04.043 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-08-29 20:13:04.049427 | orchestrator | 20:13:04.043 STDOUT terraform:  + fingerprint = (known after apply) 2025-08-29 20:13:04.049434 | orchestrator | 20:13:04.043 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049442 | orchestrator | 20:13:04.043 STDOUT terraform:  + name = "testbed" 2025-08-29 20:13:04.049450 | orchestrator | 20:13:04.043 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 20:13:04.049458 | orchestrator | 20:13:04.043 STDOUT terraform:  + public_key = (known after apply) 2025-08-29 20:13:04.049466 | orchestrator | 20:13:04.043 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049485 | orchestrator | 20:13:04.043 STDOUT terraform:  + user_id = (known after apply) 2025-08-29 20:13:04.049494 | orchestrator | 20:13:04.043 STDOUT terraform:  } 2025-08-29 20:13:04.049514 | orchestrator | 20:13:04.043 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-08-29 20:13:04.049523 | orchestrator | 20:13:04.043 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049531 | orchestrator | 20:13:04.043 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049539 | orchestrator | 20:13:04.043 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049552 | orchestrator | 20:13:04.043 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049560 | orchestrator | 20:13:04.043 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049568 | orchestrator | 20:13:04.043 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049576 | orchestrator | 20:13:04.043 STDOUT terraform:  } 2025-08-29 20:13:04.049584 | orchestrator | 20:13:04.043 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-08-29 20:13:04.049592 | orchestrator | 20:13:04.043 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049600 | orchestrator | 20:13:04.043 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049608 | orchestrator | 20:13:04.043 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049616 | 
orchestrator | 20:13:04.043 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049623 | orchestrator | 20:13:04.043 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049631 | orchestrator | 20:13:04.043 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049639 | orchestrator | 20:13:04.043 STDOUT terraform:  } 2025-08-29 20:13:04.049647 | orchestrator | 20:13:04.043 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-08-29 20:13:04.049655 | orchestrator | 20:13:04.043 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049663 | orchestrator | 20:13:04.043 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049670 | orchestrator | 20:13:04.043 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049678 | orchestrator | 20:13:04.043 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049686 | orchestrator | 20:13:04.043 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049694 | orchestrator | 20:13:04.044 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049702 | orchestrator | 20:13:04.044 STDOUT terraform:  } 2025-08-29 20:13:04.049710 | orchestrator | 20:13:04.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-08-29 20:13:04.049718 | orchestrator | 20:13:04.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049726 | orchestrator | 20:13:04.044 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049734 | orchestrator | 20:13:04.044 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049742 | orchestrator | 20:13:04.044 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049749 | orchestrator | 20:13:04.044 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049757 | orchestrator | 20:13:04.044 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049765 | orchestrator | 20:13:04.044 STDOUT terraform:  } 2025-08-29 20:13:04.049773 | orchestrator | 20:13:04.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-08-29 20:13:04.049787 | orchestrator | 20:13:04.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049795 | orchestrator | 20:13:04.044 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049808 | orchestrator | 20:13:04.044 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049816 | orchestrator | 20:13:04.044 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049831 | orchestrator | 20:13:04.044 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049840 | orchestrator | 20:13:04.044 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049848 | orchestrator | 20:13:04.044 STDOUT terraform:  } 2025-08-29 20:13:04.049856 | orchestrator | 20:13:04.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-08-29 20:13:04.049864 | orchestrator | 20:13:04.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049872 | orchestrator | 20:13:04.044 STDOUT terraform:  + device = (known after 
apply) 2025-08-29 20:13:04.049880 | orchestrator | 20:13:04.044 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049888 | orchestrator | 20:13:04.044 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049895 | orchestrator | 20:13:04.044 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049903 | orchestrator | 20:13:04.044 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049911 | orchestrator | 20:13:04.044 STDOUT terraform:  } 2025-08-29 20:13:04.049919 | orchestrator | 20:13:04.044 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-08-29 20:13:04.049927 | orchestrator | 20:13:04.044 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049935 | orchestrator | 20:13:04.044 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.049943 | orchestrator | 20:13:04.044 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.049951 | orchestrator | 20:13:04.044 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.049959 | orchestrator | 20:13:04.045 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.049967 | orchestrator | 20:13:04.045 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.049975 | orchestrator | 20:13:04.045 STDOUT terraform:  } 2025-08-29 20:13:04.049983 | orchestrator | 20:13:04.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-08-29 20:13:04.049991 | orchestrator | 20:13:04.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.049999 | orchestrator | 20:13:04.045 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.050007 | orchestrator | 20:13:04.045 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050057 | orchestrator | 20:13:04.045 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.050067 | orchestrator | 20:13:04.045 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050081 | orchestrator | 20:13:04.045 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.050089 | orchestrator | 20:13:04.045 STDOUT terraform:  } 2025-08-29 20:13:04.050097 | orchestrator | 20:13:04.045 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-08-29 20:13:04.050105 | orchestrator | 20:13:04.045 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-08-29 20:13:04.050113 | orchestrator | 20:13:04.045 STDOUT terraform:  + device = (known after apply) 2025-08-29 20:13:04.050121 | orchestrator | 20:13:04.045 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050129 | orchestrator | 20:13:04.045 STDOUT terraform:  + instance_id = (known after apply) 2025-08-29 20:13:04.050137 | orchestrator | 20:13:04.045 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050145 | orchestrator | 20:13:04.045 STDOUT terraform:  + volume_id = (known after apply) 2025-08-29 20:13:04.050153 | orchestrator | 20:13:04.045 STDOUT terraform:  } 2025-08-29 20:13:04.050165 | orchestrator | 20:13:04.045 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-08-29 20:13:04.050175 | orchestrator | 20:13:04.045 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-08-29 20:13:04.050189 | orchestrator | 20:13:04.045 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 20:13:04.050197 | orchestrator | 20:13:04.045 STDOUT terraform:  + floating_ip = (known after apply) 2025-08-29 20:13:04.050205 | orchestrator | 20:13:04.045 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050213 | orchestrator | 20:13:04.045 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 20:13:04.050221 | orchestrator | 20:13:04.045 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050229 | orchestrator | 20:13:04.045 STDOUT terraform:  } 2025-08-29 20:13:04.050237 | orchestrator | 20:13:04.045 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-08-29 20:13:04.050245 | orchestrator | 20:13:04.045 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-08-29 20:13:04.050253 | orchestrator | 20:13:04.045 STDOUT terraform:  + address = (known after apply) 2025-08-29 20:13:04.050261 | orchestrator | 20:13:04.045 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.050268 | orchestrator | 20:13:04.045 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 20:13:04.050276 | orchestrator | 20:13:04.045 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.050284 | orchestrator | 20:13:04.045 STDOUT terraform:  + fixed_ip = (known after apply) 2025-08-29 20:13:04.050292 | orchestrator | 20:13:04.048 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050300 | orchestrator | 20:13:04.048 STDOUT terraform:  + pool = "public" 2025-08-29 20:13:04.050308 | orchestrator | 20:13:04.048 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 20:13:04.050316 | orchestrator | 20:13:04.048 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050324 | orchestrator | 20:13:04.048 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.050336 | orchestrator | 20:13:04.048 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.050344 | orchestrator | 20:13:04.048 STDOUT terraform:  } 2025-08-29 20:13:04.050352 | orchestrator | 20:13:04.048 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-08-29 20:13:04.050360 | orchestrator | 20:13:04.048 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-08-29 20:13:04.050368 | orchestrator | 20:13:04.048 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.050376 | orchestrator | 20:13:04.048 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.050384 | orchestrator | 20:13:04.048 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 20:13:04.050391 | orchestrator | 20:13:04.048 STDOUT terraform:  + "nova", 2025-08-29 20:13:04.050399 | orchestrator | 20:13:04.048 STDOUT terraform:  ] 2025-08-29 20:13:04.050407 | orchestrator | 20:13:04.048 STDOUT terraform:  + dns_domain = (known after apply) 2025-08-29 20:13:04.050415 | orchestrator | 20:13:04.048 STDOUT terraform:  + external = (known after apply) 2025-08-29 20:13:04.050423 | orchestrator | 20:13:04.048 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050431 | orchestrator | 20:13:04.049 STDOUT terraform:  + mtu = (known after apply) 2025-08-29 20:13:04.050439 | orchestrator | 20:13:04.049 STDOUT terraform:  + name = 
"net-testbed-management" 2025-08-29 20:13:04.050446 | orchestrator | 20:13:04.049 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.050454 | orchestrator | 20:13:04.049 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.050462 | orchestrator | 20:13:04.049 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050470 | orchestrator | 20:13:04.049 STDOUT terraform:  + shared = (known after apply) 2025-08-29 20:13:04.050478 | orchestrator | 20:13:04.049 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.050490 | orchestrator | 20:13:04.049 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-08-29 20:13:04.050498 | orchestrator | 20:13:04.049 STDOUT terraform:  + segments (known after apply) 2025-08-29 20:13:04.050506 | orchestrator | 20:13:04.049 STDOUT terraform:  } 2025-08-29 20:13:04.050514 | orchestrator | 20:13:04.049 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-08-29 20:13:04.050522 | orchestrator | 20:13:04.049 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-08-29 20:13:04.050530 | orchestrator | 20:13:04.049 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.050538 | orchestrator | 20:13:04.049 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.050545 | orchestrator | 20:13:04.049 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.050553 | orchestrator | 20:13:04.049 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.050576 | orchestrator | 20:13:04.049 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.050590 | orchestrator | 20:13:04.049 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.050598 | orchestrator | 20:13:04.049 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.050606 | orchestrator | 20:13:04.049 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.050613 | orchestrator | 20:13:04.049 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050621 | orchestrator | 20:13:04.049 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.050629 | orchestrator | 20:13:04.049 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.050637 | orchestrator | 20:13:04.049 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.050645 | orchestrator | 20:13:04.049 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.050653 | orchestrator | 20:13:04.049 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050660 | orchestrator | 20:13:04.049 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.050668 | orchestrator | 20:13:04.049 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.050676 | orchestrator | 20:13:04.049 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.050684 | orchestrator | 20:13:04.049 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.050692 | orchestrator | 20:13:04.050 STDOUT terraform:  } 2025-08-29 20:13:04.050700 | orchestrator | 20:13:04.050 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.050708 | orchestrator | 20:13:04.050 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.050716 | orchestrator | 20:13:04.050 STDOUT 
terraform:  } 2025-08-29 20:13:04.050724 | orchestrator | 20:13:04.050 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.050732 | orchestrator | 20:13:04.050 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.050739 | orchestrator | 20:13:04.050 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-08-29 20:13:04.050747 | orchestrator | 20:13:04.050 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.050755 | orchestrator | 20:13:04.050 STDOUT terraform:  } 2025-08-29 20:13:04.050763 | orchestrator | 20:13:04.050 STDOUT terraform:  } 2025-08-29 20:13:04.050771 | orchestrator | 20:13:04.050 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-08-29 20:13:04.050779 | orchestrator | 20:13:04.050 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.050791 | orchestrator | 20:13:04.050 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.050799 | orchestrator | 20:13:04.050 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.050807 | orchestrator | 20:13:04.050 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.050819 | orchestrator | 20:13:04.050 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.050827 | orchestrator | 20:13:04.050 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.050841 | orchestrator | 20:13:04.050 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.050849 | orchestrator | 20:13:04.050 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.050857 | orchestrator | 20:13:04.050 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.050865 | orchestrator | 20:13:04.050 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.050872 | orchestrator | 20:13:04.050 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.050880 | orchestrator | 20:13:04.050 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.050888 | orchestrator | 20:13:04.050 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.050896 | orchestrator | 20:13:04.050 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.050904 | orchestrator | 20:13:04.050 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.050915 | orchestrator | 20:13:04.050 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.050923 | orchestrator | 20:13:04.050 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.050931 | orchestrator | 20:13:04.050 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.050939 | orchestrator | 20:13:04.050 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.050950 | orchestrator | 20:13:04.050 STDOUT terraform:  } 2025-08-29 20:13:04.050958 | orchestrator | 20:13:04.050 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.050969 | orchestrator | 20:13:04.050 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.050979 | orchestrator | 20:13:04.050 STDOUT terraform:  } 2025-08-29 20:13:04.050990 | orchestrator | 20:13:04.050 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.054751 | orchestrator | 20:13:04.050 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.055012 | orchestrator | 20:13:04.051 STDOUT terraform:  } 
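The preceding entries also plan a generated keypair (openstack_compute_keypair_v2.key), nine volume attachments, a floating IP from the "public" pool with its association, the net-testbed-management network, and the manager's management port at 192.168.16.5. Read back into HCL they correspond roughly to the sketch below; the subnet reference is an assumption, and the volume attachments are omitted because their instance/volume mapping only appears in the plan as (known after apply).

# Hedged sketch of the keypair, floating IP, management network and manager port above.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed" # no public_key supplied, so the provider generates one (the sensitive private_key in the plan)
}

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    ip_address = "192.168.16.5"
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed subnet resource
  }

  # Additional prefixes/addresses the port is allowed to source, as listed in the plan.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}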
2025-08-29 20:13:04.055032 | orchestrator | 20:13:04.051 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055050 | orchestrator | 20:13:04.051 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.055054 | orchestrator | 20:13:04.051 STDOUT terraform:  } 2025-08-29 20:13:04.055058 | orchestrator | 20:13:04.051 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.055063 | orchestrator | 20:13:04.051 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.055067 | orchestrator | 20:13:04.051 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-08-29 20:13:04.055071 | orchestrator | 20:13:04.051 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.055075 | orchestrator | 20:13:04.051 STDOUT terraform:  } 2025-08-29 20:13:04.055079 | orchestrator | 20:13:04.051 STDOUT terraform:  } 2025-08-29 20:13:04.055083 | orchestrator | 20:13:04.051 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-08-29 20:13:04.055087 | orchestrator | 20:13:04.051 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.055102 | orchestrator | 20:13:04.051 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.055106 | orchestrator | 20:13:04.051 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.055110 | orchestrator | 20:13:04.051 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.055114 | orchestrator | 20:13:04.051 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.055118 | orchestrator | 20:13:04.051 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.055122 | orchestrator | 20:13:04.051 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.055136 | orchestrator | 20:13:04.051 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.055140 | orchestrator | 20:13:04.051 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.055143 | orchestrator | 20:13:04.052 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.055147 | orchestrator | 20:13:04.052 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.055151 | orchestrator | 20:13:04.052 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.055154 | orchestrator | 20:13:04.052 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.055158 | orchestrator | 20:13:04.052 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.055162 | orchestrator | 20:13:04.052 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.055166 | orchestrator | 20:13:04.052 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.055169 | orchestrator | 20:13:04.052 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.055173 | orchestrator | 20:13:04.052 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055177 | orchestrator | 20:13:04.052 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.055181 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 20:13:04.055185 | orchestrator | 20:13:04.052 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055189 | orchestrator | 20:13:04.052 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.055192 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 
20:13:04.055196 | orchestrator | 20:13:04.052 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055200 | orchestrator | 20:13:04.052 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.055204 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 20:13:04.055208 | orchestrator | 20:13:04.052 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055220 | orchestrator | 20:13:04.052 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.055224 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 20:13:04.055228 | orchestrator | 20:13:04.052 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.055232 | orchestrator | 20:13:04.052 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.055240 | orchestrator | 20:13:04.052 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-08-29 20:13:04.055243 | orchestrator | 20:13:04.052 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.055247 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 20:13:04.055251 | orchestrator | 20:13:04.052 STDOUT terraform:  } 2025-08-29 20:13:04.055255 | orchestrator | 20:13:04.052 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-08-29 20:13:04.055258 | orchestrator | 20:13:04.052 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.055262 | orchestrator | 20:13:04.053 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.055266 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.055275 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.055279 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.055283 | orchestrator | 20:13:04.053 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.055287 | orchestrator | 20:13:04.053 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.055293 | orchestrator | 20:13:04.053 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.055297 | orchestrator | 20:13:04.053 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.055300 | orchestrator | 20:13:04.053 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.055304 | orchestrator | 20:13:04.053 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.055308 | orchestrator | 20:13:04.053 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.055312 | orchestrator | 20:13:04.053 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.055315 | orchestrator | 20:13:04.053 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.055319 | orchestrator | 20:13:04.053 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.055323 | orchestrator | 20:13:04.053 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.055327 | orchestrator | 20:13:04.053 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.055330 | orchestrator | 20:13:04.053 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055334 | orchestrator | 20:13:04.053 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.055338 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055342 | 
orchestrator | 20:13:04.053 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055346 | orchestrator | 20:13:04.053 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.055349 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055353 | orchestrator | 20:13:04.053 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055357 | orchestrator | 20:13:04.053 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.055364 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055368 | orchestrator | 20:13:04.053 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055371 | orchestrator | 20:13:04.053 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.055375 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055379 | orchestrator | 20:13:04.053 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.055387 | orchestrator | 20:13:04.053 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.055391 | orchestrator | 20:13:04.053 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 20:13:04.055395 | orchestrator | 20:13:04.053 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.055398 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055402 | orchestrator | 20:13:04.053 STDOUT terraform:  } 2025-08-29 20:13:04.055406 | orchestrator | 20:13:04.053 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 20:13:04.055410 | orchestrator | 20:13:04.053 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.055414 | orchestrator | 20:13:04.053 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.055417 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.055421 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.055425 | orchestrator | 20:13:04.053 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.055429 | orchestrator | 20:13:04.054 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.055432 | orchestrator | 20:13:04.054 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.055436 | orchestrator | 20:13:04.054 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.055440 | orchestrator | 20:13:04.054 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.055449 | orchestrator | 20:13:04.054 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.055453 | orchestrator | 20:13:04.054 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.055457 | orchestrator | 20:13:04.054 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.055461 | orchestrator | 20:13:04.054 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.055465 | orchestrator | 20:13:04.054 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.055468 | orchestrator | 20:13:04.054 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.055472 | orchestrator | 20:13:04.054 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.055476 | orchestrator | 20:13:04.054 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.055480 | orchestrator | 
20:13:04.054 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055483 | orchestrator | 20:13:04.054 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.055491 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055494 | orchestrator | 20:13:04.054 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055498 | orchestrator | 20:13:04.054 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.055502 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055506 | orchestrator | 20:13:04.054 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055509 | orchestrator | 20:13:04.054 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.055513 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055517 | orchestrator | 20:13:04.054 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055521 | orchestrator | 20:13:04.054 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.055524 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055528 | orchestrator | 20:13:04.054 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.055532 | orchestrator | 20:13:04.054 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.055536 | orchestrator | 20:13:04.054 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 20:13:04.055539 | orchestrator | 20:13:04.054 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.055546 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055550 | orchestrator | 20:13:04.054 STDOUT terraform:  } 2025-08-29 20:13:04.055554 | orchestrator | 20:13:04.054 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 20:13:04.055558 | orchestrator | 20:13:04.054 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.055562 | orchestrator | 20:13:04.054 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.055566 | orchestrator | 20:13:04.054 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.055569 | orchestrator | 20:13:04.054 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.055573 | orchestrator | 20:13:04.054 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.055577 | orchestrator | 20:13:04.054 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.055581 | orchestrator | 20:13:04.055 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.055584 | orchestrator | 20:13:04.055 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.055588 | orchestrator | 20:13:04.055 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.055592 | orchestrator | 20:13:04.055 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.055596 | orchestrator | 20:13:04.055 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.055599 | orchestrator | 20:13:04.055 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.055603 | orchestrator | 20:13:04.055 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 20:13:04.055610 | orchestrator | 20:13:04.055 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.055613 | orchestrator | 20:13:04.055 STDOUT terraform:  + region = (known after apply) 
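The six node_port_management entries in the plan differ only in their fixed IP (192.168.16.10 through 192.168.16.15) and share the same four allowed_address_pairs, which suggests a counted resource along the lines of the sketch below; the subnet reference is again an assumption.

# Hedged sketch of the per-node management ports (fixed IPs .10 through .15).
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    ip_address = "192.168.16.${10 + count.index}"
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed
  }

  # The same four allowed prefixes/addresses appear on every node port in the plan.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}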
2025-08-29 20:13:04.055617 | orchestrator | 20:13:04.055 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.055621 | orchestrator | 20:13:04.055 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.055624 | orchestrator | 20:13:04.055 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055628 | orchestrator | 20:13:04.055 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.055632 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055636 | orchestrator | 20:13:04.055 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055640 | orchestrator | 20:13:04.055 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.055643 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055647 | orchestrator | 20:13:04.055 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055651 | orchestrator | 20:13:04.055 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.055655 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055658 | orchestrator | 20:13:04.055 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.055664 | orchestrator | 20:13:04.055 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.055668 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055672 | orchestrator | 20:13:04.055 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.055675 | orchestrator | 20:13:04.055 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.055679 | orchestrator | 20:13:04.055 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 20:13:04.055683 | orchestrator | 20:13:04.055 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.055687 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055714 | orchestrator | 20:13:04.055 STDOUT terraform:  } 2025-08-29 20:13:04.055756 | orchestrator | 20:13:04.055 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 20:13:04.055837 | orchestrator | 20:13:04.055 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 20:13:04.055846 | orchestrator | 20:13:04.055 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.055953 | orchestrator | 20:13:04.055 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 20:13:04.055979 | orchestrator | 20:13:04.055 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 20:13:04.057306 | orchestrator | 20:13:04.055 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.057324 | orchestrator | 20:13:04.055 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 20:13:04.057329 | orchestrator | 20:13:04.055 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 20:13:04.057333 | orchestrator | 20:13:04.055 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 20:13:04.057348 | orchestrator | 20:13:04.056 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 20:13:04.057352 | orchestrator | 20:13:04.056 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.057356 | orchestrator | 20:13:04.056 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 20:13:04.057360 | orchestrator | 20:13:04.056 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.057380 | orchestrator | 20:13:04.056 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-08-29 20:13:04.057384 | orchestrator | 20:13:04.056 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 20:13:04.057390 | orchestrator | 20:13:04.056 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.057394 | orchestrator | 20:13:04.056 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 20:13:04.057398 | orchestrator | 20:13:04.056 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.057402 | orchestrator | 20:13:04.056 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.057406 | orchestrator | 20:13:04.056 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 20:13:04.057410 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057414 | orchestrator | 20:13:04.056 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.057418 | orchestrator | 20:13:04.056 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 20:13:04.057421 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057425 | orchestrator | 20:13:04.056 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.057429 | orchestrator | 20:13:04.056 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 20:13:04.057432 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057436 | orchestrator | 20:13:04.056 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 20:13:04.057440 | orchestrator | 20:13:04.056 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 20:13:04.057459 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057463 | orchestrator | 20:13:04.056 STDOUT terraform:  + binding (known after apply) 2025-08-29 20:13:04.057467 | orchestrator | 20:13:04.056 STDOUT terraform:  + fixed_ip { 2025-08-29 20:13:04.057471 | orchestrator | 20:13:04.056 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 20:13:04.057474 | orchestrator | 20:13:04.056 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.057478 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057482 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057486 | orchestrator | 20:13:04.056 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 20:13:04.057490 | orchestrator | 20:13:04.056 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 20:13:04.057494 | orchestrator | 20:13:04.056 STDOUT terraform:  + force_destroy = false 2025-08-29 20:13:04.057498 | orchestrator | 20:13:04.056 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.057505 | orchestrator | 20:13:04.056 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 20:13:04.057509 | orchestrator | 20:13:04.056 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.057513 | orchestrator | 20:13:04.056 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 20:13:04.057516 | orchestrator | 20:13:04.056 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 20:13:04.057535 | orchestrator | 20:13:04.056 STDOUT terraform:  } 2025-08-29 20:13:04.057543 | orchestrator | 20:13:04.056 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-08-29 20:13:04.057547 | orchestrator | 20:13:04.056 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 20:13:04.057551 | orchestrator | 
20:13:04.056 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 20:13:04.057555 | orchestrator | 20:13:04.056 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.057559 | orchestrator | 20:13:04.056 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 20:13:04.057564 | orchestrator | 20:13:04.056 STDOUT terraform:  + "nova", 2025-08-29 20:13:04.057568 | orchestrator | 20:13:04.056 STDOUT terraform:  ] 2025-08-29 20:13:04.057572 | orchestrator | 20:13:04.056 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 20:13:04.057576 | orchestrator | 20:13:04.057 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 20:13:04.057580 | orchestrator | 20:13:04.057 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 20:13:04.057586 | orchestrator | 20:13:04.057 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 20:13:04.057590 | orchestrator | 20:13:04.057 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.057594 | orchestrator | 20:13:04.057 STDOUT terraform:  + name = "testbed" 2025-08-29 20:13:04.057597 | orchestrator | 20:13:04.057 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.057620 | orchestrator | 20:13:04.057 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.057624 | orchestrator | 20:13:04.057 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 20:13:04.057628 | orchestrator | 20:13:04.057 STDOUT terraform:  } 2025-08-29 20:13:04.057631 | orchestrator | 20:13:04.057 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 20:13:04.057636 | orchestrator | 20:13:04.057 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 20:13:04.057640 | orchestrator | 20:13:04.057 STDOUT terraform:  + description = "ssh" 2025-08-29 20:13:04.057643 | orchestrator | 20:13:04.057 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.057647 | orchestrator | 20:13:04.057 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.057651 | orchestrator | 20:13:04.057 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.057655 | orchestrator | 20:13:04.057 STDOUT terraform:  + port_range_max = 22 2025-08-29 20:13:04.057658 | orchestrator | 20:13:04.057 STDOUT terraform:  + port_range_min = 22 2025-08-29 20:13:04.057667 | orchestrator | 20:13:04.057 STDOUT terraform:  + protocol = "tcp" 2025-08-29 20:13:04.057671 | orchestrator | 20:13:04.057 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.057675 | orchestrator | 20:13:04.057 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.057695 | orchestrator | 20:13:04.057 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.057699 | orchestrator | 20:13:04.057 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.057704 | orchestrator | 20:13:04.057 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.057774 | orchestrator | 20:13:04.057 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.057780 | orchestrator | 20:13:04.057 STDOUT terraform:  } 2025-08-29 20:13:04.057802 | orchestrator | 20:13:04.057 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 20:13:04.057867 | orchestrator | 20:13:04.057 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 20:13:04.057886 | orchestrator | 20:13:04.057 STDOUT terraform:  + description = "wireguard" 2025-08-29 20:13:04.057916 | orchestrator | 20:13:04.057 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.057949 | orchestrator | 20:13:04.057 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.057977 | orchestrator | 20:13:04.057 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.058000 | orchestrator | 20:13:04.057 STDOUT terraform:  + port_range_max = 51820 2025-08-29 20:13:04.058059 | orchestrator | 20:13:04.057 STDOUT terraform:  + port_range_min = 51820 2025-08-29 20:13:04.058088 | orchestrator | 20:13:04.058 STDOUT terraform:  + protocol = "udp" 2025-08-29 20:13:04.058129 | orchestrator | 20:13:04.058 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.058164 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.058203 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.058231 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.058275 | orchestrator | 20:13:04.058 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.058309 | orchestrator | 20:13:04.058 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.058316 | orchestrator | 20:13:04.058 STDOUT terraform:  } 2025-08-29 20:13:04.058364 | orchestrator | 20:13:04.058 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_ru 2025-08-29 20:13:04.058435 | orchestrator | 20:13:04.058 STDOUT terraform: le3 will be created 2025-08-29 20:13:04.058470 | orchestrator | 20:13:04.058 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 20:13:04.058512 | orchestrator | 20:13:04.058 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.058519 | orchestrator | 20:13:04.058 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.058557 | orchestrator | 20:13:04.058 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.058593 | orchestrator | 20:13:04.058 STDOUT terraform:  + protocol = "tcp" 2025-08-29 20:13:04.058618 | orchestrator | 20:13:04.058 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.058655 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.058691 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.058726 | orchestrator | 20:13:04.058 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 20:13:04.058763 | orchestrator | 20:13:04.058 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.058799 | orchestrator | 20:13:04.058 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.058805 | orchestrator | 20:13:04.058 STDOUT terraform:  } 2025-08-29 20:13:04.058863 | orchestrator | 20:13:04.058 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 20:13:04.058916 | orchestrator | 20:13:04.058 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 20:13:04.058949 | orchestrator | 20:13:04.058 STDOUT terraform:  + direction = "ingress" 
2025-08-29 20:13:04.058984 | orchestrator | 20:13:04.058 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.059010 | orchestrator | 20:13:04.058 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.059031 | orchestrator | 20:13:04.059 STDOUT terraform:  + protocol = "udp" 2025-08-29 20:13:04.059078 | orchestrator | 20:13:04.059 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.059115 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.059156 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.059187 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 20:13:04.059235 | orchestrator | 20:13:04.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.059257 | orchestrator | 20:13:04.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.059263 | orchestrator | 20:13:04.059 STDOUT terraform:  } 2025-08-29 20:13:04.059322 | orchestrator | 20:13:04.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 20:13:04.059390 | orchestrator | 20:13:04.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 20:13:04.059398 | orchestrator | 20:13:04.059 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.059441 | orchestrator | 20:13:04.059 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.059496 | orchestrator | 20:13:04.059 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.059547 | orchestrator | 20:13:04.059 STDOUT terraform:  + protocol = "icmp" 2025-08-29 20:13:04.059581 | orchestrator | 20:13:04.059 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.059627 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.059653 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.059705 | orchestrator | 20:13:04.059 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.059727 | orchestrator | 20:13:04.059 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.059753 | orchestrator | 20:13:04.059 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.059759 | orchestrator | 20:13:04.059 STDOUT terraform:  } 2025-08-29 20:13:04.059815 | orchestrator | 20:13:04.059 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-08-29 20:13:04.059867 | orchestrator | 20:13:04.059 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-08-29 20:13:04.059895 | orchestrator | 20:13:04.059 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.059918 | orchestrator | 20:13:04.059 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.059955 | orchestrator | 20:13:04.059 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.059980 | orchestrator | 20:13:04.059 STDOUT terraform:  + protocol = "tcp" 2025-08-29 20:13:04.060045 | orchestrator | 20:13:04.059 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.060088 | orchestrator | 20:13:04.060 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.060126 | orchestrator | 20:13:04.060 STDOUT 
terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.060153 | orchestrator | 20:13:04.060 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.060196 | orchestrator | 20:13:04.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.060255 | orchestrator | 20:13:04.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.060280 | orchestrator | 20:13:04.060 STDOUT terraform:  } 2025-08-29 20:13:04.060333 | orchestrator | 20:13:04.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-08-29 20:13:04.060384 | orchestrator | 20:13:04.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-08-29 20:13:04.060429 | orchestrator | 20:13:04.060 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.060466 | orchestrator | 20:13:04.060 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.060503 | orchestrator | 20:13:04.060 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.060531 | orchestrator | 20:13:04.060 STDOUT terraform:  + protocol = "udp" 2025-08-29 20:13:04.060566 | orchestrator | 20:13:04.060 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.060602 | orchestrator | 20:13:04.060 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.060640 | orchestrator | 20:13:04.060 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.060673 | orchestrator | 20:13:04.060 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.060707 | orchestrator | 20:13:04.060 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.060744 | orchestrator | 20:13:04.060 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.060751 | orchestrator | 20:13:04.060 STDOUT terraform:  } 2025-08-29 20:13:04.060803 | orchestrator | 20:13:04.060 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-08-29 20:13:04.060855 | orchestrator | 20:13:04.060 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-08-29 20:13:04.060887 | orchestrator | 20:13:04.060 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.060916 | orchestrator | 20:13:04.060 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.060954 | orchestrator | 20:13:04.060 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.060979 | orchestrator | 20:13:04.060 STDOUT terraform:  + protocol = "icmp" 2025-08-29 20:13:04.061015 | orchestrator | 20:13:04.060 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.061081 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.061117 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.061157 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.061188 | orchestrator | 20:13:04.061 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.061221 | orchestrator | 20:13:04.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.061227 | orchestrator | 20:13:04.061 STDOUT terraform:  } 2025-08-29 20:13:04.061281 | orchestrator | 20:13:04.061 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp 
will be created 2025-08-29 20:13:04.061331 | orchestrator | 20:13:04.061 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-08-29 20:13:04.061359 | orchestrator | 20:13:04.061 STDOUT terraform:  + description = "vrrp" 2025-08-29 20:13:04.061386 | orchestrator | 20:13:04.061 STDOUT terraform:  + direction = "ingress" 2025-08-29 20:13:04.061411 | orchestrator | 20:13:04.061 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 20:13:04.061448 | orchestrator | 20:13:04.061 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.061473 | orchestrator | 20:13:04.061 STDOUT terraform:  + protocol = "112" 2025-08-29 20:13:04.061510 | orchestrator | 20:13:04.061 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.061546 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 20:13:04.061582 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 20:13:04.061613 | orchestrator | 20:13:04.061 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 20:13:04.061648 | orchestrator | 20:13:04.061 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 20:13:04.061685 | orchestrator | 20:13:04.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.061691 | orchestrator | 20:13:04.061 STDOUT terraform:  } 2025-08-29 20:13:04.061744 | orchestrator | 20:13:04.061 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-08-29 20:13:04.061793 | orchestrator | 20:13:04.061 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-08-29 20:13:04.061822 | orchestrator | 20:13:04.061 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.061854 | orchestrator | 20:13:04.061 STDOUT terraform:  + description = "management security group" 2025-08-29 20:13:04.061882 | orchestrator | 20:13:04.061 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.061910 | orchestrator | 20:13:04.061 STDOUT terraform:  + name = "testbed-management" 2025-08-29 20:13:04.061937 | orchestrator | 20:13:04.061 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.061965 | orchestrator | 20:13:04.061 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 20:13:04.061992 | orchestrator | 20:13:04.061 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.061999 | orchestrator | 20:13:04.061 STDOUT terraform:  } 2025-08-29 20:13:04.062096 | orchestrator | 20:13:04.061 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-08-29 20:13:04.062145 | orchestrator | 20:13:04.062 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-08-29 20:13:04.062172 | orchestrator | 20:13:04.062 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.062204 | orchestrator | 20:13:04.062 STDOUT terraform:  + description = "node security group" 2025-08-29 20:13:04.062229 | orchestrator | 20:13:04.062 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.062254 | orchestrator | 20:13:04.062 STDOUT terraform:  + name = "testbed-node" 2025-08-29 20:13:04.062282 | orchestrator | 20:13:04.062 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.062310 | orchestrator | 20:13:04.062 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 20:13:04.062340 | orchestrator | 
20:13:04.062 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.062346 | orchestrator | 20:13:04.062 STDOUT terraform:  } 2025-08-29 20:13:04.062393 | orchestrator | 20:13:04.062 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-08-29 20:13:04.062439 | orchestrator | 20:13:04.062 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-08-29 20:13:04.062473 | orchestrator | 20:13:04.062 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 20:13:04.062504 | orchestrator | 20:13:04.062 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-08-29 20:13:04.062524 | orchestrator | 20:13:04.062 STDOUT terraform:  + dns_nameservers = [ 2025-08-29 20:13:04.062531 | orchestrator | 20:13:04.062 STDOUT terraform:  + "8.8.8.8", 2025-08-29 20:13:04.062549 | orchestrator | 20:13:04.062 STDOUT terraform:  + "9.9.9.9", 2025-08-29 20:13:04.062556 | orchestrator | 20:13:04.062 STDOUT terraform:  ] 2025-08-29 20:13:04.062578 | orchestrator | 20:13:04.062 STDOUT terraform:  + enable_dhcp = true 2025-08-29 20:13:04.062613 | orchestrator | 20:13:04.062 STDOUT terraform:  + gateway_ip = (known after apply) 2025-08-29 20:13:04.062640 | orchestrator | 20:13:04.062 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.062663 | orchestrator | 20:13:04.062 STDOUT terraform:  + ip_version = 4 2025-08-29 20:13:04.062689 | orchestrator | 20:13:04.062 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-08-29 20:13:04.062719 | orchestrator | 20:13:04.062 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-08-29 20:13:04.062756 | orchestrator | 20:13:04.062 STDOUT terraform:  + name = "subnet-testbed-management" 2025-08-29 20:13:04.062785 | orchestrator | 20:13:04.062 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 20:13:04.062805 | orchestrator | 20:13:04.062 STDOUT terraform:  + no_gateway = false 2025-08-29 20:13:04.062835 | orchestrator | 20:13:04.062 STDOUT terraform:  + region = (known after apply) 2025-08-29 20:13:04.062871 | orchestrator | 20:13:04.062 STDOUT terraform:  + service_types = (known after apply) 2025-08-29 20:13:04.062898 | orchestrator | 20:13:04.062 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 20:13:04.062916 | orchestrator | 20:13:04.062 STDOUT terraform:  + allocation_pool { 2025-08-29 20:13:04.062941 | orchestrator | 20:13:04.062 STDOUT terraform:  + end = "192.168.31.250" 2025-08-29 20:13:04.062964 | orchestrator | 20:13:04.062 STDOUT terraform:  + start = "192.168.31.200" 2025-08-29 20:13:04.062970 | orchestrator | 20:13:04.062 STDOUT terraform:  } 2025-08-29 20:13:04.062987 | orchestrator | 20:13:04.062 STDOUT terraform:  } 2025-08-29 20:13:04.063010 | orchestrator | 20:13:04.062 STDOUT terraform:  # terraform_data.image will be created 2025-08-29 20:13:04.063045 | orchestrator | 20:13:04.063 STDOUT terraform:  + resource "terraform_data" "image" { 2025-08-29 20:13:04.063074 | orchestrator | 20:13:04.063 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.063095 | orchestrator | 20:13:04.063 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 20:13:04.063117 | orchestrator | 20:13:04.063 STDOUT terraform:  + output = (known after apply) 2025-08-29 20:13:04.063123 | orchestrator | 20:13:04.063 STDOUT terraform:  } 2025-08-29 20:13:04.063156 | orchestrator | 20:13:04.063 STDOUT terraform:  # terraform_data.image_node will be created 2025-08-29 20:13:04.063184 | orchestrator | 20:13:04.063 STDOUT terraform:  + 
resource "terraform_data" "image_node" { 2025-08-29 20:13:04.063208 | orchestrator | 20:13:04.063 STDOUT terraform:  + id = (known after apply) 2025-08-29 20:13:04.063230 | orchestrator | 20:13:04.063 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 20:13:04.063259 | orchestrator | 20:13:04.063 STDOUT terraform:  + output = (known after apply) 2025-08-29 20:13:04.063266 | orchestrator | 20:13:04.063 STDOUT terraform:  } 2025-08-29 20:13:04.063296 | orchestrator | 20:13:04.063 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-08-29 20:13:04.063306 | orchestrator | 20:13:04.063 STDOUT terraform: Changes to Outputs: 2025-08-29 20:13:04.063329 | orchestrator | 20:13:04.063 STDOUT terraform:  + manager_address = (sensitive value) 2025-08-29 20:13:04.063354 | orchestrator | 20:13:04.063 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 20:13:04.245630 | orchestrator | 20:13:04.245 STDOUT terraform: terraform_data.image_node: Creating... 2025-08-29 20:13:04.246453 | orchestrator | 20:13:04.246 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=b64ce9f9-450a-58ea-6ea6-1dc3adebe0ae] 2025-08-29 20:13:04.248745 | orchestrator | 20:13:04.248 STDOUT terraform: terraform_data.image: Creating... 2025-08-29 20:13:04.250312 | orchestrator | 20:13:04.250 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=9ac4caa2-979c-5821-efb9-fe5440e09c6b] 2025-08-29 20:13:04.271209 | orchestrator | 20:13:04.270 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-08-29 20:13:04.271789 | orchestrator | 20:13:04.271 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-08-29 20:13:04.277935 | orchestrator | 20:13:04.277 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-08-29 20:13:04.278330 | orchestrator | 20:13:04.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-08-29 20:13:04.278915 | orchestrator | 20:13:04.278 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-08-29 20:13:04.279692 | orchestrator | 20:13:04.279 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-08-29 20:13:04.282871 | orchestrator | 20:13:04.282 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-08-29 20:13:04.283654 | orchestrator | 20:13:04.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-08-29 20:13:04.284701 | orchestrator | 20:13:04.284 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-08-29 20:13:04.299183 | orchestrator | 20:13:04.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-08-29 20:13:04.716437 | orchestrator | 20:13:04.715 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 20:13:04.723732 | orchestrator | 20:13:04.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-08-29 20:13:04.742054 | orchestrator | 20:13:04.741 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-08-29 20:13:04.748427 | orchestrator | 20:13:04.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2025-08-29 20:13:04.765989 | orchestrator | 20:13:04.765 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-08-29 20:13:04.771107 | orchestrator | 20:13:04.770 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-08-29 20:13:05.341670 | orchestrator | 20:13:05.341 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=8d63254d-9a55-4b59-9e24-0bda92e963ec] 2025-08-29 20:13:05.350378 | orchestrator | 20:13:05.350 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-08-29 20:13:07.961184 | orchestrator | 20:13:07.960 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=3da68947-c337-4052-9861-a1ec6021be59] 2025-08-29 20:13:07.966606 | orchestrator | 20:13:07.966 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-08-29 20:13:07.972150 | orchestrator | 20:13:07.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=8de48b33-02fa-44df-ab75-fb3adc163aaf] 2025-08-29 20:13:07.978089 | orchestrator | 20:13:07.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=0fdcfb5c-5644-43f4-9439-4c34089784df] 2025-08-29 20:13:07.979932 | orchestrator | 20:13:07.979 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-08-29 20:13:07.988400 | orchestrator | 20:13:07.988 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-08-29 20:13:07.993578 | orchestrator | 20:13:07.993 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=9a372554-c439-41ad-8970-95d88d0b4dbe] 2025-08-29 20:13:07.993643 | orchestrator | 20:13:07.993 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=87912232-aa7c-4262-871d-9bc5d73b0ac4] 2025-08-29 20:13:08.000957 | orchestrator | 20:13:08.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-08-29 20:13:08.001002 | orchestrator | 20:13:08.000 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-08-29 20:13:08.006800 | orchestrator | 20:13:08.006 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=51de580c-8abc-4940-b3c7-576b20a2ecb2] 2025-08-29 20:13:08.012407 | orchestrator | 20:13:08.012 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-08-29 20:13:08.039977 | orchestrator | 20:13:08.039 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=bf74f504-ac7d-4b49-a722-26f61d318d88] 2025-08-29 20:13:08.055736 | orchestrator | 20:13:08.055 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-08-29 20:13:08.059290 | orchestrator | 20:13:08.059 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=00f0f810f51d3db35654a1212895206808157cea] 2025-08-29 20:13:08.069463 | orchestrator | 20:13:08.069 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-08-29 20:13:08.078830 | orchestrator | 20:13:08.078 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=cf81ba63cea9fa6f439185ae5b805116350ce814] 2025-08-29 20:13:08.084382 | orchestrator | 20:13:08.084 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
2025-08-29 20:13:08.101119 | orchestrator | 20:13:08.100 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=02349b33-ae7e-4f46-b237-ffaefc5b0042] 2025-08-29 20:13:08.329098 | orchestrator | 20:13:08.328 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=b39085cf-2099-4337-b75a-480912a54346] 2025-08-29 20:13:08.752482 | orchestrator | 20:13:08.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=d50da758-3434-48e7-b2ef-c53bd7d7b8a5] 2025-08-29 20:13:09.158598 | orchestrator | 20:13:09.158 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=73f5559f-7ad8-4190-af4f-43d5e7739711] 2025-08-29 20:13:09.166650 | orchestrator | 20:13:09.166 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-08-29 20:13:11.444700 | orchestrator | 20:13:11.444 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=14df01a6-19d6-409d-8aac-29053c3f8745] 2025-08-29 20:13:11.476998 | orchestrator | 20:13:11.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=85cdd01a-0b69-40c0-874a-0ae950f34a38] 2025-08-29 20:13:11.494372 | orchestrator | 20:13:11.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=14923614-9faf-46a7-928a-44af17f4ba91] 2025-08-29 20:13:11.498777 | orchestrator | 20:13:11.498 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=d6ce559c-8e77-4180-a88c-a928a51812f7] 2025-08-29 20:13:11.519078 | orchestrator | 20:13:11.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=40c948a9-46ff-474d-963f-02eb165645ed] 2025-08-29 20:13:11.535827 | orchestrator | 20:13:11.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=4a3e5fad-a271-4574-87a4-9e8d4d1d75c0] 2025-08-29 20:13:12.453154 | orchestrator | 20:13:12.452 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=2833190e-b915-478b-9350-156ed7b05287] 2025-08-29 20:13:12.460978 | orchestrator | 20:13:12.459 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-08-29 20:13:12.461074 | orchestrator | 20:13:12.459 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-08-29 20:13:12.461213 | orchestrator | 20:13:12.461 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-08-29 20:13:12.702494 | orchestrator | 20:13:12.701 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=6cc25252-8925-4dc5-8b6b-ac7eac290fd3] 2025-08-29 20:13:12.709023 | orchestrator | 20:13:12.708 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=8146e557-6178-4cbd-89d7-46bdcfb53d37] 2025-08-29 20:13:12.712095 | orchestrator | 20:13:12.711 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-08-29 20:13:12.714057 | orchestrator | 20:13:12.713 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2025-08-29 20:13:12.719136 | orchestrator | 20:13:12.718 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-08-29 20:13:12.719893 | orchestrator | 20:13:12.719 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-08-29 20:13:12.720875 | orchestrator | 20:13:12.720 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-08-29 20:13:12.721242 | orchestrator | 20:13:12.721 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-08-29 20:13:12.724028 | orchestrator | 20:13:12.723 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-08-29 20:13:12.727682 | orchestrator | 20:13:12.727 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-08-29 20:13:12.728445 | orchestrator | 20:13:12.728 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-08-29 20:13:12.907687 | orchestrator | 20:13:12.907 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=ad57ecd8-ccaa-4ed1-9e3f-9209fc23c4fa] 2025-08-29 20:13:12.916586 | orchestrator | 20:13:12.916 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-08-29 20:13:12.954189 | orchestrator | 20:13:12.953 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=084b6341-7557-4d2b-b657-6c6f3dd91112] 2025-08-29 20:13:12.968664 | orchestrator | 20:13:12.968 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-08-29 20:13:13.106982 | orchestrator | 20:13:13.106 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=e56c1a15-244e-47a3-91ff-579ffb1af560] 2025-08-29 20:13:13.117483 | orchestrator | 20:13:13.117 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-08-29 20:13:13.256520 | orchestrator | 20:13:13.256 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=e952cbf3-ab0c-4d5c-a8bb-bf312ccb921d] 2025-08-29 20:13:13.272494 | orchestrator | 20:13:13.272 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-08-29 20:13:13.317917 | orchestrator | 20:13:13.317 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=c84219b9-ba37-4668-99c7-e656f9067062] 2025-08-29 20:13:13.329608 | orchestrator | 20:13:13.329 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-08-29 20:13:13.695274 | orchestrator | 20:13:13.694 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=c1f00805-b609-40d8-a513-087b73e96d1e] 2025-08-29 20:13:13.707334 | orchestrator | 20:13:13.705 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6b96d0f3-edf8-4099-b155-f71308ac49ed] 2025-08-29 20:13:13.707397 | orchestrator | 20:13:13.707 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-08-29 20:13:13.726286 | orchestrator | 20:13:13.726 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
2025-08-29 20:13:13.879234 | orchestrator | 20:13:13.878 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f6d75ff0-060d-4248-a909-1a890392cbdb] 2025-08-29 20:13:13.954150 | orchestrator | 20:13:13.953 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=fb5f6d9c-fc4f-4ad3-8b26-bb568b4aad94] 2025-08-29 20:13:14.187690 | orchestrator | 20:13:14.186 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=84fc9583-0723-4ef4-9e0e-628dd699d4a8] 2025-08-29 20:13:14.192222 | orchestrator | 20:13:14.191 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e582518e-db95-49c4-930a-e4bb0f837e8d] 2025-08-29 20:13:14.320133 | orchestrator | 20:13:14.312 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=5e5f38cd-c5ed-4972-a5a2-cb025d231b82] 2025-08-29 20:13:14.322875 | orchestrator | 20:13:14.322 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=523993f6-ef0f-4bfd-bd38-cc28ee3c101c] 2025-08-29 20:13:14.553550 | orchestrator | 20:13:14.553 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=79c56ed9-4f4a-4648-a8ae-b0cb5fa6eb4d] 2025-08-29 20:13:14.631467 | orchestrator | 20:13:14.631 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=e6297e21-2179-4a03-bed1-592fb7779100] 2025-08-29 20:13:14.640717 | orchestrator | 20:13:14.640 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-08-29 20:13:14.677258 | orchestrator | 20:13:14.676 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f491ed68-62f2-4da8-bc21-fe7660d0e4a5] 2025-08-29 20:13:14.711161 | orchestrator | 20:13:14.710 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-08-29 20:13:14.712553 | orchestrator | 20:13:14.712 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-08-29 20:13:14.718009 | orchestrator | 20:13:14.717 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-08-29 20:13:14.723452 | orchestrator | 20:13:14.723 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-08-29 20:13:14.735372 | orchestrator | 20:13:14.735 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-08-29 20:13:14.735799 | orchestrator | 20:13:14.735 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-08-29 20:13:15.176631 | orchestrator | 20:13:15.176 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=59341a1e-17fa-4201-b2df-effd43c6eba0] 2025-08-29 20:13:16.404924 | orchestrator | 20:13:16.404 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=63fe9493-ed2f-4a0d-a5d3-65f432fc7ee7] 2025-08-29 20:13:16.422366 | orchestrator | 20:13:16.421 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-08-29 20:13:16.422426 | orchestrator | 20:13:16.421 STDOUT terraform: local_file.inventory: Creating... 2025-08-29 20:13:16.423251 | orchestrator | 20:13:16.422 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 
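Each node_port_management port created here follows the plan blocks near the top of this excerpt: one fixed management IP per node (192.168.16.14 for index 4, .15 for index 5, which suggests a base of .10 plus the node index) and the same four allowed_address_pairs so the nodes may answer for the virtual addresses and the 192.168.112.0/20 range. A per-node sketch using count; the count value, port name and the .10-plus-index rule are inferences, and the net/subnet references point at the resources sketched above:

# Sketch only: addresses copied from the plan, index arithmetic inferred.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                   # assumption: six node ports in this run
  name       = "port-testbed-node-${count.index}"  # assumption: names are not shown here
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}"  # .14 / .15 observed for indexes 4 / 5
  }

  # Additional addresses this port may source/answer for (VIPs and internal range).
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}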
2025-08-29 20:13:16.427708 | orchestrator | 20:13:16.427 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a4c34420c941307fb06ceccdbbef588c72ffe804] 2025-08-29 20:13:16.428582 | orchestrator | 20:13:16.428 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=f1f8f3008ea2c8c0177f542be5947ef0394c735b] 2025-08-29 20:13:18.205608 | orchestrator | 20:13:18.205 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=63fe9493-ed2f-4a0d-a5d3-65f432fc7ee7] 2025-08-29 20:13:24.714120 | orchestrator | 20:13:24.713 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-08-29 20:13:24.714270 | orchestrator | 20:13:24.713 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-08-29 20:13:24.719327 | orchestrator | 20:13:24.718 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-08-29 20:13:24.728862 | orchestrator | 20:13:24.728 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-08-29 20:13:24.736868 | orchestrator | 20:13:24.736 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-08-29 20:13:24.739004 | orchestrator | 20:13:24.738 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-08-29 20:13:34.715131 | orchestrator | 20:13:34.714 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-08-29 20:13:34.715259 | orchestrator | 20:13:34.715 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-08-29 20:13:34.719378 | orchestrator | 20:13:34.719 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-08-29 20:13:34.729806 | orchestrator | 20:13:34.729 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-08-29 20:13:34.737175 | orchestrator | 20:13:34.736 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-08-29 20:13:34.739331 | orchestrator | 20:13:34.739 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-08-29 20:13:35.419618 | orchestrator | 20:13:35.419 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=b0e9775a-8523-49f8-b09e-0b18e3257171] 2025-08-29 20:13:44.715849 | orchestrator | 20:13:44.715 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-08-29 20:13:44.716028 | orchestrator | 20:13:44.715 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-08-29 20:13:44.730074 | orchestrator | 20:13:44.729 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-08-29 20:13:44.737152 | orchestrator | 20:13:44.737 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-08-29 20:13:44.740424 | orchestrator | 20:13:44.740 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2025-08-29 20:13:45.415636 | orchestrator | 20:13:45.415 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=f976fef8-ce7f-427b-a5d5-ae41bef5e245] 2025-08-29 20:13:45.567733 | orchestrator | 20:13:45.567 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=4439de2d-1e2d-44ec-95e2-8dd783db45de] 2025-08-29 20:13:46.080664 | orchestrator | 20:13:46.080 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=39e8778b-5221-4b8e-863a-d70c4405c16f] 2025-08-29 20:13:46.199416 | orchestrator | 20:13:46.199 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6a2208a1-09bf-4f87-89d7-e708022c1681] 2025-08-29 20:13:46.238235 | orchestrator | 20:13:46.237 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=cb55ed49-9c45-433d-b094-238bc01113a3] 2025-08-29 20:13:46.260200 | orchestrator | 20:13:46.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-08-29 20:13:46.264521 | orchestrator | 20:13:46.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-08-29 20:13:46.266815 | orchestrator | 20:13:46.266 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-08-29 20:13:46.267349 | orchestrator | 20:13:46.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-08-29 20:13:46.270975 | orchestrator | 20:13:46.270 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-08-29 20:13:46.275016 | orchestrator | 20:13:46.274 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3658107427431183550] 2025-08-29 20:13:46.294309 | orchestrator | 20:13:46.294 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-08-29 20:13:46.296404 | orchestrator | 20:13:46.296 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-08-29 20:13:46.297921 | orchestrator | 20:13:46.297 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-08-29 20:13:46.298543 | orchestrator | 20:13:46.298 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-08-29 20:13:46.302707 | orchestrator | 20:13:46.302 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-08-29 20:13:46.305135 | orchestrator | 20:13:46.305 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
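The nine node_volume_attachment resources created here are reported below with IDs of the form <instance_id>/<volume_id>; all of them land on node_server[3], [4] and [5] (three volumes each), which points to an index mapping along the lines of count.index % 3 + 3. That mapping is an inference from the attachment IDs, not something visible in this excerpt, so the sketch is illustrative only and reuses the node_server/node_volume resources from the actual configuration:

# Sketch only: the instance index arithmetic is inferred from the attachment IDs.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[(count.index % 3) + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}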
2025-08-29 20:13:49.733401 | orchestrator | 20:13:49.732 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=39e8778b-5221-4b8e-863a-d70c4405c16f/9a372554-c439-41ad-8970-95d88d0b4dbe] 2025-08-29 20:13:49.735815 | orchestrator | 20:13:49.735 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=f976fef8-ce7f-427b-a5d5-ae41bef5e245/b39085cf-2099-4337-b75a-480912a54346] 2025-08-29 20:13:49.764432 | orchestrator | 20:13:49.763 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=f976fef8-ce7f-427b-a5d5-ae41bef5e245/8de48b33-02fa-44df-ab75-fb3adc163aaf] 2025-08-29 20:13:49.770001 | orchestrator | 20:13:49.769 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=cb55ed49-9c45-433d-b094-238bc01113a3/02349b33-ae7e-4f46-b237-ffaefc5b0042] 2025-08-29 20:13:49.774155 | orchestrator | 20:13:49.773 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=cb55ed49-9c45-433d-b094-238bc01113a3/0fdcfb5c-5644-43f4-9439-4c34089784df] 2025-08-29 20:13:49.783091 | orchestrator | 20:13:49.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=39e8778b-5221-4b8e-863a-d70c4405c16f/3da68947-c337-4052-9861-a1ec6021be59] 2025-08-29 20:13:55.882306 | orchestrator | 20:13:55.881 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=f976fef8-ce7f-427b-a5d5-ae41bef5e245/87912232-aa7c-4262-871d-9bc5d73b0ac4] 2025-08-29 20:13:55.899291 | orchestrator | 20:13:55.898 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=39e8778b-5221-4b8e-863a-d70c4405c16f/bf74f504-ac7d-4b49-a722-26f61d318d88] 2025-08-29 20:13:55.929038 | orchestrator | 20:13:55.928 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=cb55ed49-9c45-433d-b094-238bc01113a3/51de580c-8abc-4940-b3c7-576b20a2ecb2] 2025-08-29 20:13:56.311042 | orchestrator | 20:13:56.310 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-08-29 20:14:06.312190 | orchestrator | 20:14:06.311 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-08-29 20:14:06.682558 | orchestrator | 20:14:06.682 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=2fea784a-2840-495c-a089-4f86d0d91821] 2025-08-29 20:14:06.727843 | orchestrator | 20:14:06.727 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
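manager_address and private_key appear empty in the Outputs section below because both are declared sensitive, exactly as the "Changes to Outputs" part of the plan indicated; the subsequent "Fetch manager address" task presumably reads the value back out of state (for example with terraform output -raw, an assumption not confirmed by this log). A sketch of matching output declarations; the expressions behind the values are assumptions:

# Sketch only: the referenced resources/attributes are assumptions.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  # assumption: the key material comes from a tls_private_key resource
  value     = tls_private_key.ssh.private_key_pem
  sensitive = true
}

Marking an output sensitive only redacts it in plan/apply rendering; the value remains in the state file and can still be retrieved with terraform output.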
2025-08-29 20:14:06.728075 | orchestrator | 20:14:06.727 STDOUT terraform: Outputs: 2025-08-29 20:14:06.728276 | orchestrator | 20:14:06.727 STDOUT terraform: manager_address = 2025-08-29 20:14:06.728310 | orchestrator | 20:14:06.727 STDOUT terraform: private_key = 2025-08-29 20:14:07.147900 | orchestrator | ok: Runtime: 0:01:12.352327 2025-08-29 20:14:07.180183 | 2025-08-29 20:14:07.180300 | TASK [Fetch manager address] 2025-08-29 20:14:07.601717 | orchestrator | ok 2025-08-29 20:14:07.611586 | 2025-08-29 20:14:07.611710 | TASK [Set manager_host address] 2025-08-29 20:14:07.690597 | orchestrator | ok 2025-08-29 20:14:07.699066 | 2025-08-29 20:14:07.699178 | LOOP [Update ansible collections] 2025-08-29 20:14:12.587118 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 20:14:12.587775 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 20:14:12.587876 | orchestrator | Starting galaxy collection install process 2025-08-29 20:14:12.588109 | orchestrator | Process install dependency map 2025-08-29 20:14:12.588195 | orchestrator | Starting collection install process 2025-08-29 20:14:12.588263 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 20:14:12.588393 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-08-29 20:14:12.588468 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 20:14:12.588626 | orchestrator | ok: Item: commons Runtime: 0:00:04.324062 2025-08-29 20:14:13.589056 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 20:14:13.589263 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 20:14:13.589315 | orchestrator | Starting galaxy collection install process 2025-08-29 20:14:13.589357 | orchestrator | Process install dependency map 2025-08-29 20:14:13.589414 | orchestrator | Starting collection install process 2025-08-29 20:14:13.589470 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-08-29 20:14:13.589646 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-08-29 20:14:13.589699 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 20:14:13.589764 | orchestrator | ok: Item: services Runtime: 0:00:00.739935 2025-08-29 20:14:13.613032 | 2025-08-29 20:14:13.613219 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 20:14:24.203709 | orchestrator | ok 2025-08-29 20:14:24.214042 | 2025-08-29 20:14:24.214157 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 20:15:24.255656 | orchestrator | ok 2025-08-29 20:15:24.266014 | 2025-08-29 20:15:24.266156 | TASK [Fetch manager ssh hostkey] 2025-08-29 20:15:25.846073 | orchestrator | Output suppressed because no_log was given 2025-08-29 20:15:25.862696 | 2025-08-29 20:15:25.862912 | TASK [Get ssh keypair from terraform environment] 2025-08-29 20:15:26.401259 | orchestrator | ok: Runtime: 0:00:00.008589 2025-08-29 20:15:26.417936 | 2025-08-29 20:15:26.418109 | TASK [Point out that the following task takes some time and does not give any output] 
2025-08-29 20:15:26.466747 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 20:15:26.477474 | 2025-08-29 20:15:26.477619 | TASK [Run manager part 0] 2025-08-29 20:15:27.903546 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 20:15:28.041893 | orchestrator | 2025-08-29 20:15:28.041950 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 20:15:28.041962 | orchestrator | 2025-08-29 20:15:28.041983 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 20:15:29.749533 | orchestrator | ok: [testbed-manager] 2025-08-29 20:15:29.749577 | orchestrator | 2025-08-29 20:15:29.749599 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 20:15:29.749608 | orchestrator | 2025-08-29 20:15:29.749617 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:15:31.675304 | orchestrator | ok: [testbed-manager] 2025-08-29 20:15:31.675354 | orchestrator | 2025-08-29 20:15:31.675375 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 20:15:32.304337 | orchestrator | ok: [testbed-manager] 2025-08-29 20:15:32.304382 | orchestrator | 2025-08-29 20:15:32.304391 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 20:15:32.357895 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.357923 | orchestrator | 2025-08-29 20:15:32.357931 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 20:15:32.387863 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.387875 | orchestrator | 2025-08-29 20:15:32.387881 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 20:15:32.414307 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.414316 | orchestrator | 2025-08-29 20:15:32.414321 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 20:15:32.438216 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.438226 | orchestrator | 2025-08-29 20:15:32.438230 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 20:15:32.462685 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.462695 | orchestrator | 2025-08-29 20:15:32.462699 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 20:15:32.486496 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.486504 | orchestrator | 2025-08-29 20:15:32.486509 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 20:15:32.510646 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:15:32.510671 | orchestrator | 2025-08-29 20:15:32.510676 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 20:15:33.228295 | orchestrator | changed: [testbed-manager] 2025-08-29 20:15:33.228331 | orchestrator | 2025-08-29 20:15:33.228337 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-08-29 20:17:57.267104 | orchestrator | changed: [testbed-manager] 2025-08-29 20:17:57.267153 | orchestrator | 2025-08-29 20:17:57.267164 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 20:19:10.243722 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:10.243818 | orchestrator | 2025-08-29 20:19:10.243836 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 20:19:33.104973 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:33.105017 | orchestrator | 2025-08-29 20:19:33.105027 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 20:19:41.215044 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:41.215112 | orchestrator | 2025-08-29 20:19:41.215121 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 20:19:41.264476 | orchestrator | ok: [testbed-manager] 2025-08-29 20:19:41.264515 | orchestrator | 2025-08-29 20:19:41.264524 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 20:19:42.040874 | orchestrator | ok: [testbed-manager] 2025-08-29 20:19:42.040937 | orchestrator | 2025-08-29 20:19:42.040955 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 20:19:42.781362 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:42.781425 | orchestrator | 2025-08-29 20:19:42.781442 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 20:19:48.959950 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:48.960008 | orchestrator | 2025-08-29 20:19:48.960051 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 20:19:54.708021 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:54.708102 | orchestrator | 2025-08-29 20:19:54.708119 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 20:19:57.196001 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:57.196082 | orchestrator | 2025-08-29 20:19:57.196100 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 20:19:58.868743 | orchestrator | changed: [testbed-manager] 2025-08-29 20:19:58.868802 | orchestrator | 2025-08-29 20:19:58.868812 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 20:19:59.950909 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 20:19:59.951022 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 20:19:59.951052 | orchestrator | 2025-08-29 20:19:59.951065 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 20:19:59.994091 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 20:19:59.994146 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 20:19:59.994158 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 20:19:59.994171 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 20:20:05.113672 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 20:20:05.113743 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 20:20:05.113752 | orchestrator | 2025-08-29 20:20:05.113761 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 20:20:05.661750 | orchestrator | changed: [testbed-manager] 2025-08-29 20:20:05.661788 | orchestrator | 2025-08-29 20:20:05.661795 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 20:21:27.624997 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 20:21:27.625098 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 20:21:27.625116 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 20:21:27.625130 | orchestrator | 2025-08-29 20:21:27.625143 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 20:21:29.873865 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-08-29 20:21:29.873900 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 20:21:29.873905 | orchestrator | 2025-08-29 20:21:29.873911 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 20:21:29.873916 | orchestrator | 2025-08-29 20:21:29.873920 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:21:31.259483 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:31.259520 | orchestrator | 2025-08-29 20:21:31.259529 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 20:21:31.308702 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:31.308745 | orchestrator | 2025-08-29 20:21:31.308754 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 20:21:31.379494 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:31.379675 | orchestrator | 2025-08-29 20:21:31.379693 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 20:21:32.148646 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:32.148728 | orchestrator | 2025-08-29 20:21:32.148743 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 20:21:32.855957 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:32.856143 | orchestrator | 2025-08-29 20:21:32.856161 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 20:21:34.169721 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 20:21:34.169967 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 20:21:34.170002 | orchestrator | 2025-08-29 20:21:34.170119 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 20:21:35.541698 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:35.541796 | orchestrator | 2025-08-29 20:21:35.541811 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 20:21:37.220118 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 
20:21:37.220299 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 20:21:37.220346 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:21:37.220359 | orchestrator | 2025-08-29 20:21:37.220372 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 20:21:37.275587 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:37.275673 | orchestrator | 2025-08-29 20:21:37.275691 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 20:21:37.811545 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:37.811627 | orchestrator | 2025-08-29 20:21:37.811643 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 20:21:37.886657 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:37.886708 | orchestrator | 2025-08-29 20:21:37.886718 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 20:21:38.749095 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 20:21:38.749145 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:38.749153 | orchestrator | 2025-08-29 20:21:38.749160 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 20:21:38.785367 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:38.785534 | orchestrator | 2025-08-29 20:21:38.785555 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 20:21:38.823869 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:38.823914 | orchestrator | 2025-08-29 20:21:38.823922 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 20:21:38.852407 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:38.852455 | orchestrator | 2025-08-29 20:21:38.852466 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 20:21:38.899948 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:38.899992 | orchestrator | 2025-08-29 20:21:38.900001 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 20:21:39.571608 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:39.571687 | orchestrator | 2025-08-29 20:21:39.571701 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 20:21:39.571712 | orchestrator | 2025-08-29 20:21:39.571722 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:21:40.912727 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:40.912818 | orchestrator | 2025-08-29 20:21:40.912835 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 20:21:41.843807 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:41.843852 | orchestrator | 2025-08-29 20:21:41.843858 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:21:41.843864 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 20:21:41.843869 | orchestrator | 2025-08-29 20:21:42.220240 | orchestrator | ok: Runtime: 0:06:15.179080 2025-08-29 20:21:42.238445 | 
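The 'Run manager part 0' play above bootstraps the Ansible tooling on testbed-manager: it creates /opt/venv, installs ansible-core, netaddr, requests and docker into it, and installs Galaxy and local collections for later use. A minimal shell sketch of the roughly equivalent manual steps, assuming the venv path shown in the log and a collections path under /usr/share/ansible (the exact task arguments are not visible in this output):

  # create the virtualenv and install the Python tooling seen in the tasks above
  python3 -m venv /opt/venv
  /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'

  # install the Galaxy collections; the target path is an assumption based on
  # the "Create /usr/share/ansible directory" task
  /opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
      ansible.netcommon ansible.posix 'community.docker>=3.10.2'

  # the local collections synced to /opt/src could be installed the same way
  # (assuming each source directory carries a galaxy.yml)
  /opt/venv/bin/ansible-galaxy collection install -p /usr/share/ansible/collections \
      /opt/src/osism/ansible-collection-commons /opt/src/osism/ansible-collection-services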
2025-08-29 20:21:42.238588 | TASK [Point out that the log in on the manager is now possible] 2025-08-29 20:21:42.287034 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 2025-08-29 20:21:42.296721 | 2025-08-29 20:21:42.296839 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 20:21:42.344088 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 20:21:42.353557 | 2025-08-29 20:21:42.353676 | TASK [Run manager part 1 + 2] 2025-08-29 20:21:43.257834 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 20:21:43.308588 | orchestrator | 2025-08-29 20:21:43.308660 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 20:21:43.308675 | orchestrator | 2025-08-29 20:21:43.308699 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:21:46.048500 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.048656 | orchestrator | 2025-08-29 20:21:46.048710 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 20:21:46.080149 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:46.080189 | orchestrator | 2025-08-29 20:21:46.080196 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 20:21:46.108338 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.108370 | orchestrator | 2025-08-29 20:21:46.108376 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 20:21:46.135135 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.135162 | orchestrator | 2025-08-29 20:21:46.135168 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 20:21:46.193084 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.193121 | orchestrator | 2025-08-29 20:21:46.193129 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 20:21:46.246610 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.246645 | orchestrator | 2025-08-29 20:21:46.246652 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 20:21:46.282626 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 20:21:46.282658 | orchestrator | 2025-08-29 20:21:46.282665 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 20:21:46.904838 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:46.904913 | orchestrator | 2025-08-29 20:21:46.904930 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 20:21:46.953754 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:21:46.953797 | orchestrator | 2025-08-29 20:21:46.953805 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 20:21:48.115911 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:48.115988 | orchestrator | 2025-08-29 20:21:48.116007 | orchestrator | TASK
[osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 20:21:48.624031 | orchestrator | ok: [testbed-manager] 2025-08-29 20:21:48.624103 | orchestrator | 2025-08-29 20:21:48.624120 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 20:21:49.625655 | orchestrator | changed: [testbed-manager] 2025-08-29 20:21:49.625706 | orchestrator | 2025-08-29 20:21:49.625716 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 20:22:05.152229 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:05.152330 | orchestrator | 2025-08-29 20:22:05.152349 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 20:22:05.730611 | orchestrator | ok: [testbed-manager] 2025-08-29 20:22:05.730682 | orchestrator | 2025-08-29 20:22:05.730699 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 20:22:05.783533 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:22:05.783575 | orchestrator | 2025-08-29 20:22:05.783582 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 20:22:06.623879 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:06.624130 | orchestrator | 2025-08-29 20:22:06.624147 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 20:22:07.537044 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:07.537081 | orchestrator | 2025-08-29 20:22:07.537087 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 20:22:08.029143 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:08.029208 | orchestrator | 2025-08-29 20:22:08.029223 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 20:22:08.065936 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 20:22:08.066006 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 20:22:08.066051 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 20:22:08.066063 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 20:22:10.780795 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:10.780833 | orchestrator | 2025-08-29 20:22:10.780841 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 20:22:18.565973 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 20:22:18.566067 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 20:22:18.566083 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 20:22:18.566092 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 20:22:18.566107 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 20:22:18.566115 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 20:22:18.566124 | orchestrator | 2025-08-29 20:22:18.566133 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 20:22:19.566721 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:19.566930 | orchestrator | 2025-08-29 20:22:19.566951 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 20:22:19.606065 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:22:19.606135 | orchestrator | 2025-08-29 20:22:19.606150 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 20:22:22.606173 | orchestrator | changed: [testbed-manager] 2025-08-29 20:22:22.606257 | orchestrator | 2025-08-29 20:22:22.606272 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 20:22:22.647808 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:22:22.647875 | orchestrator | 2025-08-29 20:22:22.647890 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 20:23:54.585813 | orchestrator | changed: [testbed-manager] 2025-08-29 20:23:54.585904 | orchestrator | 2025-08-29 20:23:54.585924 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 20:23:55.645863 | orchestrator | ok: [testbed-manager] 2025-08-29 20:23:55.645902 | orchestrator | 2025-08-29 20:23:55.645909 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:23:55.645916 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 20:23:55.645921 | orchestrator | 2025-08-29 20:23:55.960245 | orchestrator | ok: Runtime: 0:02:13.095067 2025-08-29 20:23:55.979114 | 2025-08-29 20:23:55.979280 | TASK [Reboot manager] 2025-08-29 20:23:57.516700 | orchestrator | ok: Runtime: 0:00:00.931224 2025-08-29 20:23:57.534590 | 2025-08-29 20:23:57.534751 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 20:24:10.963348 | orchestrator | ok 2025-08-29 20:24:10.974044 | 2025-08-29 20:24:10.974172 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 20:25:11.022550 | orchestrator | ok 2025-08-29 20:25:11.032002 | 2025-08-29 20:25:11.032135 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 20:25:13.275253 | orchestrator | 2025-08-29 20:25:13.275430 | orchestrator | # DEPLOY MANAGER 2025-08-29 20:25:13.275489 | orchestrator | 2025-08-29 20:25:13.275505 | orchestrator | + set -e 2025-08-29 20:25:13.275519 | orchestrator | + echo 2025-08-29 20:25:13.275534 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-08-29 20:25:13.275551 | orchestrator | + echo 2025-08-29 20:25:13.275611 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 20:25:13.278578 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 20:25:13.278604 | orchestrator | 2025-08-29 20:25:13.278616 | orchestrator | export CEPH_VERSION=reef 2025-08-29 20:25:13.278629 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 20:25:13.278642 | orchestrator | export MANAGER_VERSION=9.2.0 2025-08-29 20:25:13.278663 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 20:25:13.278675 | orchestrator | 2025-08-29 20:25:13.278693 | orchestrator | export ARA=false 2025-08-29 20:25:13.278704 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 20:25:13.278721 | orchestrator | export TEMPEST=false 2025-08-29 20:25:13.278733 | orchestrator | export IS_ZUUL=true 2025-08-29 20:25:13.278744 | orchestrator | 2025-08-29 20:25:13.278762 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:25:13.278774 | orchestrator | export EXTERNAL_API=false 2025-08-29 20:25:13.278784 | orchestrator | 2025-08-29 20:25:13.278795 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 20:25:13.278809 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 20:25:13.278820 | orchestrator | 2025-08-29 20:25:13.278831 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 20:25:13.278848 | orchestrator | 2025-08-29 20:25:13.278859 | orchestrator | + echo 2025-08-29 20:25:13.278871 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 20:25:13.279490 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 20:25:13.279508 | orchestrator | ++ INTERACTIVE=false 2025-08-29 20:25:13.279520 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 20:25:13.279532 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 20:25:13.279721 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 20:25:13.279736 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 20:25:13.279776 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 20:25:13.279789 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 20:25:13.279800 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 20:25:13.279811 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 20:25:13.279822 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 20:25:13.279833 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 20:25:13.279844 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 20:25:13.279856 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 20:25:13.279874 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 20:25:13.279890 | orchestrator | ++ export ARA=false 2025-08-29 20:25:13.279901 | orchestrator | ++ ARA=false 2025-08-29 20:25:13.279912 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 20:25:13.279923 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 20:25:13.279934 | orchestrator | ++ export TEMPEST=false 2025-08-29 20:25:13.279945 | orchestrator | ++ TEMPEST=false 2025-08-29 20:25:13.279955 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 20:25:13.279966 | orchestrator | ++ IS_ZUUL=true 2025-08-29 20:25:13.279977 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:25:13.279988 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:25:13.279999 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 20:25:13.280014 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 20:25:13.280026 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 
20:25:13.280036 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 20:25:13.280048 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 20:25:13.280059 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 20:25:13.280070 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 20:25:13.280081 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 20:25:13.280095 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-08-29 20:25:13.333555 | orchestrator | + docker version 2025-08-29 20:25:13.606625 | orchestrator | Client: Docker Engine - Community 2025-08-29 20:25:13.606706 | orchestrator | Version: 27.5.1 2025-08-29 20:25:13.606721 | orchestrator | API version: 1.47 2025-08-29 20:25:13.606733 | orchestrator | Go version: go1.22.11 2025-08-29 20:25:13.606744 | orchestrator | Git commit: 9f9e405 2025-08-29 20:25:13.606756 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-08-29 20:25:13.606768 | orchestrator | OS/Arch: linux/amd64 2025-08-29 20:25:13.606779 | orchestrator | Context: default 2025-08-29 20:25:13.606790 | orchestrator | 2025-08-29 20:25:13.606801 | orchestrator | Server: Docker Engine - Community 2025-08-29 20:25:13.606812 | orchestrator | Engine: 2025-08-29 20:25:13.606824 | orchestrator | Version: 27.5.1 2025-08-29 20:25:13.606835 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-08-29 20:25:13.606876 | orchestrator | Go version: go1.22.11 2025-08-29 20:25:13.606888 | orchestrator | Git commit: 4c9b3b0 2025-08-29 20:25:13.606899 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-08-29 20:25:13.606910 | orchestrator | OS/Arch: linux/amd64 2025-08-29 20:25:13.606921 | orchestrator | Experimental: false 2025-08-29 20:25:13.606932 | orchestrator | containerd: 2025-08-29 20:25:13.606953 | orchestrator | Version: 1.7.27 2025-08-29 20:25:13.606965 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-08-29 20:25:13.606977 | orchestrator | runc: 2025-08-29 20:25:13.606988 | orchestrator | Version: 1.2.5 2025-08-29 20:25:13.606999 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-08-29 20:25:13.607010 | orchestrator | docker-init: 2025-08-29 20:25:13.607020 | orchestrator | Version: 0.19.0 2025-08-29 20:25:13.607032 | orchestrator | GitCommit: de40ad0 2025-08-29 20:25:13.610683 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-08-29 20:25:13.619091 | orchestrator | + set -e 2025-08-29 20:25:13.619109 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 20:25:13.619121 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 20:25:13.619132 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 20:25:13.619143 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 20:25:13.619154 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 20:25:13.619169 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 20:25:13.619181 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 20:25:13.619192 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 20:25:13.619203 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 20:25:13.619213 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 20:25:13.619224 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 20:25:13.619235 | orchestrator | ++ export ARA=false 2025-08-29 20:25:13.619247 | orchestrator | ++ ARA=false 2025-08-29 20:25:13.619393 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 20:25:13.619409 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 20:25:13.619420 | orchestrator | ++ 
export TEMPEST=false 2025-08-29 20:25:13.619431 | orchestrator | ++ TEMPEST=false 2025-08-29 20:25:13.619463 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 20:25:13.619474 | orchestrator | ++ IS_ZUUL=true 2025-08-29 20:25:13.619485 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:25:13.619496 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:25:13.619507 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 20:25:13.619518 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 20:25:13.619533 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 20:25:13.619545 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 20:25:13.619556 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 20:25:13.619567 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 20:25:13.619578 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 20:25:13.619589 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 20:25:13.619600 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 20:25:13.619611 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 20:25:13.619621 | orchestrator | ++ INTERACTIVE=false 2025-08-29 20:25:13.619636 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 20:25:13.619651 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 20:25:13.619873 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 20:25:13.619889 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-08-29 20:25:13.627313 | orchestrator | + set -e 2025-08-29 20:25:13.627332 | orchestrator | + VERSION=9.2.0 2025-08-29 20:25:13.627346 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-08-29 20:25:13.634607 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 20:25:13.634625 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 20:25:13.639425 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 20:25:13.643925 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-08-29 20:25:13.652224 | orchestrator | /opt/configuration ~ 2025-08-29 20:25:13.652275 | orchestrator | + set -e 2025-08-29 20:25:13.652291 | orchestrator | + pushd /opt/configuration 2025-08-29 20:25:13.652305 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 20:25:13.658806 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 20:25:13.660056 | orchestrator | ++ deactivate nondestructive 2025-08-29 20:25:13.660073 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:13.660088 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:13.660116 | orchestrator | ++ hash -r 2025-08-29 20:25:13.660127 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:13.660138 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 20:25:13.660149 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 20:25:13.660160 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 20:25:13.660172 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 20:25:13.660182 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 20:25:13.660198 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 20:25:13.660210 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 20:25:13.660222 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:25:13.660234 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:25:13.660245 | orchestrator | ++ export PATH 2025-08-29 20:25:13.660291 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:13.660304 | orchestrator | ++ '[' -z '' ']' 2025-08-29 20:25:13.660319 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 20:25:13.660330 | orchestrator | ++ PS1='(venv) ' 2025-08-29 20:25:13.660341 | orchestrator | ++ export PS1 2025-08-29 20:25:13.660352 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 20:25:13.660427 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 20:25:13.660465 | orchestrator | ++ hash -r 2025-08-29 20:25:13.660524 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-08-29 20:25:14.640001 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-08-29 20:25:14.640926 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-08-29 20:25:14.642233 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-08-29 20:25:14.643478 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-08-29 20:25:14.644701 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-08-29 20:25:14.654407 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-08-29 20:25:14.655928 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-08-29 20:25:14.657053 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-08-29 20:25:14.658317 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-08-29 20:25:14.687902 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-08-29 20:25:14.689530 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-08-29 20:25:14.691230 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-08-29 20:25:14.692567 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3) 2025-08-29 20:25:14.696628 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-08-29 20:25:14.895327 | orchestrator | ++ which gilt 2025-08-29 20:25:14.898559 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-08-29 20:25:14.898600 | orchestrator | + /opt/venv/bin/gilt overlay 2025-08-29 20:25:15.109498 | orchestrator | osism.cfg-generics: 2025-08-29 20:25:15.271494 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-08-29 20:25:15.271590 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-08-29 20:25:15.271618 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-08-29 20:25:15.271714 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-08-29 20:25:15.953002 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-08-29 20:25:16.690317 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-08-29 20:25:16.690394 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-08-29 20:25:16.690410 | orchestrator | ~ 2025-08-29 20:25:16.690423 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 20:25:16.690436 | orchestrator | + deactivate 2025-08-29 20:25:16.690470 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 20:25:16.690482 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:25:16.690493 | orchestrator | + export PATH 2025-08-29 20:25:16.690504 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 20:25:16.690516 | orchestrator | + '[' -n '' ']' 2025-08-29 20:25:16.690529 | orchestrator | + hash -r 2025-08-29 20:25:16.690540 | orchestrator | + '[' -n '' ']' 2025-08-29 20:25:16.690551 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 20:25:16.690562 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 20:25:16.690573 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-08-29 20:25:16.690584 | orchestrator | + unset -f deactivate 2025-08-29 20:25:16.690596 | orchestrator | + popd 2025-08-29 20:25:16.690606 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 20:25:16.690618 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-08-29 20:25:16.690628 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 20:25:16.690639 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 20:25:16.690650 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-08-29 20:25:16.690662 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-08-29 20:25:16.690673 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 20:25:16.690684 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 20:25:16.690695 | orchestrator | ++ deactivate nondestructive 2025-08-29 20:25:16.690706 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:16.690717 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:16.690728 | orchestrator | ++ hash -r 2025-08-29 20:25:16.690738 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:16.690749 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 20:25:16.690760 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 20:25:16.690771 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 20:25:16.690782 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 20:25:16.690793 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 20:25:16.690804 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 20:25:16.690815 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 20:25:16.690826 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:25:16.690839 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:25:16.690873 | orchestrator | ++ export PATH 2025-08-29 20:25:16.690884 | orchestrator | ++ '[' -n '' ']' 2025-08-29 20:25:16.690895 | orchestrator | ++ '[' -z '' ']' 2025-08-29 20:25:16.690906 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 20:25:16.690917 | orchestrator | ++ PS1='(venv) ' 2025-08-29 20:25:16.690928 | orchestrator | ++ export PS1 2025-08-29 20:25:16.690939 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 20:25:16.690950 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 20:25:16.690961 | orchestrator | ++ hash -r 2025-08-29 20:25:16.690973 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-08-29 20:25:17.567516 | orchestrator | 2025-08-29 20:25:17.567645 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-08-29 20:25:17.567663 | orchestrator | 2025-08-29 20:25:17.567675 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 20:25:18.139891 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:18.139998 | orchestrator | 2025-08-29 20:25:18.140015 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 20:25:19.106629 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:19.106733 | orchestrator | 2025-08-29 20:25:19.106749 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-08-29 20:25:19.106762 | orchestrator | 2025-08-29 20:25:19.106774 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:25:21.239928 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:21.240033 | orchestrator | 2025-08-29 20:25:21.240048 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-08-29 20:25:21.293670 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:21.293734 | orchestrator | 2025-08-29 20:25:21.293750 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-08-29 20:25:21.735169 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:21.735244 | orchestrator | 2025-08-29 20:25:21.735262 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-08-29 20:25:21.767277 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:21.767308 | orchestrator | 2025-08-29 20:25:21.767320 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 20:25:22.096450 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:22.097240 | orchestrator | 2025-08-29 20:25:22.097269 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-08-29 20:25:22.151639 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:22.151695 | orchestrator | 2025-08-29 20:25:22.151710 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-08-29 20:25:22.477186 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:22.477277 | orchestrator | 2025-08-29 20:25:22.477291 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-08-29 20:25:22.583566 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:22.583637 | orchestrator | 2025-08-29 20:25:22.583653 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-08-29 20:25:22.583666 | orchestrator | 2025-08-29 20:25:22.583678 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:25:24.220198 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:24.220303 | orchestrator | 2025-08-29 20:25:24.220320 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-08-29 20:25:24.316097 | orchestrator | included: osism.services.traefik for testbed-manager 2025-08-29 20:25:24.316191 | orchestrator | 2025-08-29 20:25:24.316207 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-08-29 20:25:24.369369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-08-29 20:25:24.369429 | orchestrator | 2025-08-29 20:25:24.369443 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-08-29 20:25:25.440378 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-08-29 20:25:25.440519 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-08-29 20:25:25.440538 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-08-29 20:25:25.440550 | orchestrator | 2025-08-29 20:25:25.440563 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-08-29 20:25:27.196040 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-08-29 20:25:27.196154 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-08-29 20:25:27.196169 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-08-29 20:25:27.196182 | orchestrator | 2025-08-29 20:25:27.196196 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-08-29 20:25:27.813934 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 20:25:27.814066 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:27.814085 | orchestrator | 2025-08-29 20:25:27.814099 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-08-29 20:25:28.444713 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 20:25:28.444808 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:28.444823 | orchestrator | 2025-08-29 20:25:28.444837 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-08-29 20:25:28.502990 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:28.503028 | orchestrator | 2025-08-29 20:25:28.503041 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-08-29 20:25:28.855355 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:28.855445 | orchestrator | 2025-08-29 20:25:28.855458 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-08-29 20:25:28.930364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-08-29 20:25:28.930427 | orchestrator | 2025-08-29 20:25:28.930440 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-08-29 20:25:29.916269 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:29.916368 | orchestrator | 2025-08-29 20:25:29.916383 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-08-29 20:25:30.689920 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:30.690089 | orchestrator | 2025-08-29 20:25:30.690109 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-08-29 20:25:41.240193 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:41.240304 | orchestrator | 2025-08-29 20:25:41.240338 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-08-29 20:25:41.296625 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:41.296690 | orchestrator | 2025-08-29 20:25:41.296704 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-08-29 20:25:41.296717 | orchestrator | 2025-08-29 20:25:41.296729 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:25:43.053127 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:43.053209 | orchestrator | 2025-08-29 20:25:43.053225 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-08-29 20:25:43.163181 | orchestrator | included: osism.services.manager for testbed-manager 2025-08-29 20:25:43.163267 | orchestrator | 2025-08-29 20:25:43.163281 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-08-29 20:25:43.220200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:25:43.220300 | orchestrator | 2025-08-29 20:25:43.220316 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-08-29 20:25:45.622427 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:45.622588 | orchestrator | 2025-08-29 20:25:45.622606 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-08-29 20:25:45.673784 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:45.673830 | orchestrator | 2025-08-29 20:25:45.673842 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-08-29 20:25:45.789179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-08-29 20:25:45.789261 | orchestrator | 2025-08-29 20:25:45.789278 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-08-29 20:25:48.522392 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-08-29 20:25:48.522472 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-08-29 20:25:48.522478 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-08-29 20:25:48.522483 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-08-29 20:25:48.522488 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-08-29 20:25:48.522492 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-08-29 20:25:48.522496 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-08-29 20:25:48.522522 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-08-29 20:25:48.522527 | orchestrator | 2025-08-29 20:25:48.522534 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-08-29 20:25:49.129940 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:49.130064 | orchestrator | 2025-08-29 20:25:49.130083 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-08-29 20:25:49.726112 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:49.726193 | orchestrator | 2025-08-29 20:25:49.726203 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-08-29 20:25:49.799083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-08-29 20:25:49.799187 | orchestrator | 2025-08-29 20:25:49.799209 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-08-29 20:25:50.976103 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-08-29 20:25:50.976209 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-08-29 20:25:50.976224 | orchestrator | 2025-08-29 20:25:50.976238 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-08-29 20:25:51.607663 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:51.607756 | orchestrator | 2025-08-29 20:25:51.607770 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-08-29 20:25:51.664382 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:51.664426 | orchestrator | 2025-08-29 20:25:51.664438 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-08-29 20:25:51.733819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-08-29 20:25:51.733864 | orchestrator | 2025-08-29 20:25:51.733877 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-08-29 20:25:52.325767 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:52.325864 | orchestrator | 2025-08-29 20:25:52.325879 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-08-29 20:25:52.386775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-08-29 20:25:52.386840 | orchestrator | 2025-08-29 20:25:52.386854 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-08-29 20:25:53.719444 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 20:25:53.719595 | orchestrator | changed: 
[testbed-manager] => (item=None) 2025-08-29 20:25:53.719612 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:53.719626 | orchestrator | 2025-08-29 20:25:53.719639 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-08-29 20:25:54.320575 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:54.320675 | orchestrator | 2025-08-29 20:25:54.320691 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-08-29 20:25:54.381209 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:54.381296 | orchestrator | 2025-08-29 20:25:54.381313 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-08-29 20:25:54.473148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-08-29 20:25:54.473233 | orchestrator | 2025-08-29 20:25:54.473248 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-08-29 20:25:54.980237 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:54.980337 | orchestrator | 2025-08-29 20:25:54.980353 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-08-29 20:25:55.375809 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:55.375898 | orchestrator | 2025-08-29 20:25:55.375912 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-08-29 20:25:56.554257 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-08-29 20:25:56.554364 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-08-29 20:25:56.554378 | orchestrator | 2025-08-29 20:25:56.554417 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-08-29 20:25:57.155346 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:57.155435 | orchestrator | 2025-08-29 20:25:57.155449 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-08-29 20:25:57.563266 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:57.563362 | orchestrator | 2025-08-29 20:25:57.563379 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-08-29 20:25:57.911332 | orchestrator | changed: [testbed-manager] 2025-08-29 20:25:57.911488 | orchestrator | 2025-08-29 20:25:57.911508 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-08-29 20:25:57.958570 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:25:57.958641 | orchestrator | 2025-08-29 20:25:57.958651 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-08-29 20:25:58.027203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-08-29 20:25:58.027306 | orchestrator | 2025-08-29 20:25:58.027318 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-08-29 20:25:58.071820 | orchestrator | ok: [testbed-manager] 2025-08-29 20:25:58.071890 | orchestrator | 2025-08-29 20:25:58.071903 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-08-29 20:25:59.972177 | orchestrator | changed: [testbed-manager] => 
(item=osism) 2025-08-29 20:25:59.972278 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-08-29 20:25:59.972293 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-08-29 20:25:59.972306 | orchestrator | 2025-08-29 20:25:59.972319 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-08-29 20:26:00.657157 | orchestrator | changed: [testbed-manager] 2025-08-29 20:26:00.657245 | orchestrator | 2025-08-29 20:26:00.657260 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-08-29 20:26:01.375107 | orchestrator | changed: [testbed-manager] 2025-08-29 20:26:01.375200 | orchestrator | 2025-08-29 20:26:01.375215 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-08-29 20:26:02.071629 | orchestrator | changed: [testbed-manager] 2025-08-29 20:26:02.071725 | orchestrator | 2025-08-29 20:26:02.071740 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-08-29 20:26:02.143267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-08-29 20:26:02.143330 | orchestrator | 2025-08-29 20:26:02.143342 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-08-29 20:26:02.181323 | orchestrator | ok: [testbed-manager] 2025-08-29 20:26:02.181381 | orchestrator | 2025-08-29 20:26:02.181395 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-08-29 20:26:02.879196 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-08-29 20:26:02.879319 | orchestrator | 2025-08-29 20:26:02.879336 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-08-29 20:26:02.969105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-08-29 20:26:02.969202 | orchestrator | 2025-08-29 20:26:02.969218 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-08-29 20:26:03.667996 | orchestrator | changed: [testbed-manager] 2025-08-29 20:26:03.668116 | orchestrator | 2025-08-29 20:26:03.668134 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-08-29 20:26:04.237386 | orchestrator | ok: [testbed-manager] 2025-08-29 20:26:04.237478 | orchestrator | 2025-08-29 20:26:04.237493 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-08-29 20:26:04.293521 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:26:04.293663 | orchestrator | 2025-08-29 20:26:04.293682 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-08-29 20:26:04.355774 | orchestrator | ok: [testbed-manager] 2025-08-29 20:26:04.355864 | orchestrator | 2025-08-29 20:26:04.355878 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-08-29 20:26:05.143551 | orchestrator | changed: [testbed-manager] 2025-08-29 20:26:05.143641 | orchestrator | 2025-08-29 20:26:05.143652 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-08-29 20:27:10.289775 | orchestrator | changed: 
[testbed-manager] 2025-08-29 20:27:10.289874 | orchestrator | 2025-08-29 20:27:10.289890 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-08-29 20:27:11.189561 | orchestrator | ok: [testbed-manager] 2025-08-29 20:27:11.189703 | orchestrator | 2025-08-29 20:27:11.189721 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-08-29 20:27:11.242518 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:27:11.242616 | orchestrator | 2025-08-29 20:27:11.242632 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-08-29 20:27:13.556995 | orchestrator | changed: [testbed-manager] 2025-08-29 20:27:13.557097 | orchestrator | 2025-08-29 20:27:13.557114 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-08-29 20:27:13.607622 | orchestrator | ok: [testbed-manager] 2025-08-29 20:27:13.607768 | orchestrator | 2025-08-29 20:27:13.607785 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 20:27:13.607798 | orchestrator | 2025-08-29 20:27:13.607809 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-08-29 20:27:13.650218 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:27:13.650249 | orchestrator | 2025-08-29 20:27:13.650261 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-08-29 20:28:13.699779 | orchestrator | Pausing for 60 seconds 2025-08-29 20:28:13.699881 | orchestrator | changed: [testbed-manager] 2025-08-29 20:28:13.699899 | orchestrator | 2025-08-29 20:28:13.699912 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-08-29 20:28:18.191781 | orchestrator | changed: [testbed-manager] 2025-08-29 20:28:18.191868 | orchestrator | 2025-08-29 20:28:18.191884 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-08-29 20:28:59.725503 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-08-29 20:28:59.725601 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-08-29 20:28:59.725610 | orchestrator | changed: [testbed-manager] 2025-08-29 20:28:59.725616 | orchestrator | 2025-08-29 20:28:59.725622 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-08-29 20:29:08.727339 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:08.727427 | orchestrator | 2025-08-29 20:29:08.727445 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-08-29 20:29:08.806963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-08-29 20:29:08.807030 | orchestrator | 2025-08-29 20:29:08.807043 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 20:29:08.807054 | orchestrator | 2025-08-29 20:29:08.807064 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-08-29 20:29:08.850512 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:29:08.850587 | orchestrator | 2025-08-29 20:29:08.850607 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:29:08.850624 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-08-29 20:29:08.850645 | orchestrator | 2025-08-29 20:29:08.914477 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 20:29:08.914543 | orchestrator | + deactivate 2025-08-29 20:29:08.914557 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 20:29:08.914570 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 20:29:08.914612 | orchestrator | + export PATH 2025-08-29 20:29:08.914626 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 20:29:08.914638 | orchestrator | + '[' -n '' ']' 2025-08-29 20:29:08.914650 | orchestrator | + hash -r 2025-08-29 20:29:08.914661 | orchestrator | + '[' -n '' ']' 2025-08-29 20:29:08.914672 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 20:29:08.914683 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 20:29:08.914694 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-08-29 20:29:08.914705 | orchestrator | + unset -f deactivate 2025-08-29 20:29:08.914803 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-08-29 20:29:08.922164 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 20:29:08.922193 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 20:29:08.922205 | orchestrator | + local max_attempts=60 2025-08-29 20:29:08.922216 | orchestrator | + local name=ceph-ansible 2025-08-29 20:29:08.922227 | orchestrator | + local attempt_num=1 2025-08-29 20:29:08.923086 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:29:08.955005 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:29:08.955050 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 20:29:08.955070 | orchestrator | + local max_attempts=60 2025-08-29 20:29:08.955090 | orchestrator | + local name=kolla-ansible 2025-08-29 20:29:08.955141 | orchestrator | + local attempt_num=1 2025-08-29 20:29:08.955716 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 20:29:08.987235 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:29:08.987290 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 20:29:08.987304 | orchestrator | + local max_attempts=60 2025-08-29 20:29:08.987315 | orchestrator | + local name=osism-ansible 2025-08-29 20:29:08.987326 | orchestrator | + local attempt_num=1 2025-08-29 20:29:08.987733 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 20:29:09.015308 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:29:09.015362 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 20:29:09.015376 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 20:29:09.590276 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-08-29 20:29:09.788121 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-08-29 20:29:09.788194 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788208 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788219 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-08-29 20:29:09.788231 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-08-29 20:29:09.788242 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788253 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788264 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-08-29 20:29:09.788275 | orchestrator | manager-listener-1 
registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788286 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-08-29 20:29:09.788297 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788307 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-08-29 20:29:09.788318 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788329 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-08-29 20:29:09.788340 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.788377 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-08-29 20:29:09.794886 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 20:29:09.845641 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 20:29:09.845696 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 20:29:09.850265 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 20:29:21.722288 | orchestrator | 2025-08-29 20:29:21 | INFO  | Task 68a692a9-bf00-4d2a-b265-8ba16852007b (resolvconf) was prepared for execution. 2025-08-29 20:29:21.722338 | orchestrator | 2025-08-29 20:29:21 | INFO  | It takes a moment until task 68a692a9-bf00-4d2a-b265-8ba16852007b (resolvconf) has been started and output is visible here. 
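
Between the manager bring-up and the resolvconf run kicked off above, the deploy script waits for the ceph-ansible, kolla-ansible and osism-ansible containers with a small wait_for_container_healthy helper: each call reads the container's health status via docker inspect and returns as soon as Docker reports healthy. Only the successful first probe appears in the trace, so the retry loop, sleep interval and failure handling in the sketch below are assumptions; it reconstructs the idea, not the exact function from /opt/configuration.

  # Reconstructed sketch of the helper traced above; the polling interval and the
  # timeout behaviour are assumed, only the docker inspect probe is taken from the log.
  wait_for_container_healthy() {
      local max_attempts=$1
      local name=$2
      local attempt_num=1
      until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
          if (( attempt_num >= max_attempts )); then
              echo "container ${name} did not become healthy in time" >&2
              return 1
          fi
          attempt_num=$(( attempt_num + 1 ))
          sleep 5   # assumed polling interval
      done
  }

  wait_for_container_healthy 60 ceph-ansible
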
2025-08-29 20:29:34.378107 | orchestrator | 2025-08-29 20:29:34.378214 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-08-29 20:29:34.378232 | orchestrator | 2025-08-29 20:29:34.378244 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 20:29:34.378256 | orchestrator | Friday 29 August 2025 20:29:24 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-08-29 20:29:34.378268 | orchestrator | ok: [testbed-manager] 2025-08-29 20:29:34.378280 | orchestrator | 2025-08-29 20:29:34.378291 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 20:29:34.378303 | orchestrator | Friday 29 August 2025 20:29:28 +0000 (0:00:03.360) 0:00:03.490 ********* 2025-08-29 20:29:34.378314 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:29:34.378326 | orchestrator | 2025-08-29 20:29:34.378337 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 20:29:34.378348 | orchestrator | Friday 29 August 2025 20:29:28 +0000 (0:00:00.044) 0:00:03.534 ********* 2025-08-29 20:29:34.378359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-08-29 20:29:34.378371 | orchestrator | 2025-08-29 20:29:34.378383 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 20:29:34.378394 | orchestrator | Friday 29 August 2025 20:29:28 +0000 (0:00:00.058) 0:00:03.593 ********* 2025-08-29 20:29:34.378406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:29:34.378417 | orchestrator | 2025-08-29 20:29:34.378428 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 20:29:34.378440 | orchestrator | Friday 29 August 2025 20:29:28 +0000 (0:00:00.060) 0:00:03.654 ********* 2025-08-29 20:29:34.378451 | orchestrator | ok: [testbed-manager] 2025-08-29 20:29:34.378462 | orchestrator | 2025-08-29 20:29:34.378473 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 20:29:34.378484 | orchestrator | Friday 29 August 2025 20:29:29 +0000 (0:00:00.843) 0:00:04.498 ********* 2025-08-29 20:29:34.378496 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:29:34.378507 | orchestrator | 2025-08-29 20:29:34.378519 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 20:29:34.378530 | orchestrator | Friday 29 August 2025 20:29:29 +0000 (0:00:00.054) 0:00:04.553 ********* 2025-08-29 20:29:34.378541 | orchestrator | ok: [testbed-manager] 2025-08-29 20:29:34.378552 | orchestrator | 2025-08-29 20:29:34.378563 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 20:29:34.378575 | orchestrator | Friday 29 August 2025 20:29:29 +0000 (0:00:00.404) 0:00:04.957 ********* 2025-08-29 20:29:34.378586 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:29:34.378646 | orchestrator | 2025-08-29 20:29:34.378659 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 20:29:34.378695 | orchestrator | Friday 29 August 2025 20:29:29 +0000 (0:00:00.064) 0:00:05.022 
********* 2025-08-29 20:29:34.378707 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:34.378718 | orchestrator | 2025-08-29 20:29:34.378729 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 20:29:34.378740 | orchestrator | Friday 29 August 2025 20:29:30 +0000 (0:00:00.449) 0:00:05.472 ********* 2025-08-29 20:29:34.378751 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:34.378761 | orchestrator | 2025-08-29 20:29:34.378772 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 20:29:34.378783 | orchestrator | Friday 29 August 2025 20:29:31 +0000 (0:00:00.939) 0:00:06.411 ********* 2025-08-29 20:29:34.378794 | orchestrator | ok: [testbed-manager] 2025-08-29 20:29:34.378826 | orchestrator | 2025-08-29 20:29:34.378837 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 20:29:34.378858 | orchestrator | Friday 29 August 2025 20:29:33 +0000 (0:00:01.870) 0:00:08.281 ********* 2025-08-29 20:29:34.378870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-08-29 20:29:34.378881 | orchestrator | 2025-08-29 20:29:34.378892 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 20:29:34.378903 | orchestrator | Friday 29 August 2025 20:29:33 +0000 (0:00:00.082) 0:00:08.364 ********* 2025-08-29 20:29:34.378914 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:34.378925 | orchestrator | 2025-08-29 20:29:34.378936 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:29:34.378947 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 20:29:34.378958 | orchestrator | 2025-08-29 20:29:34.378970 | orchestrator | 2025-08-29 20:29:34.378980 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:29:34.378991 | orchestrator | Friday 29 August 2025 20:29:34 +0000 (0:00:01.008) 0:00:09.373 ********* 2025-08-29 20:29:34.379002 | orchestrator | =============================================================================== 2025-08-29 20:29:34.379013 | orchestrator | Gathering Facts --------------------------------------------------------- 3.36s 2025-08-29 20:29:34.379024 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.87s 2025-08-29 20:29:34.379035 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.01s 2025-08-29 20:29:34.379045 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.94s 2025-08-29 20:29:34.379056 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.84s 2025-08-29 20:29:34.379067 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.45s 2025-08-29 20:29:34.379096 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.40s 2025-08-29 20:29:34.379108 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-08-29 20:29:34.379119 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2025-08-29 
20:29:34.379130 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2025-08-29 20:29:34.379141 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.06s 2025-08-29 20:29:34.379152 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-08-29 20:29:34.379163 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.04s 2025-08-29 20:29:34.602222 | orchestrator | + osism apply sshconfig 2025-08-29 20:29:46.423383 | orchestrator | 2025-08-29 20:29:46 | INFO  | Task a7c1a51a-c832-47b3-a050-5087cb5f9e7b (sshconfig) was prepared for execution. 2025-08-29 20:29:46.423482 | orchestrator | 2025-08-29 20:29:46 | INFO  | It takes a moment until task a7c1a51a-c832-47b3-a050-5087cb5f9e7b (sshconfig) has been started and output is visible here. 2025-08-29 20:29:57.661640 | orchestrator | 2025-08-29 20:29:57.661763 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-08-29 20:29:57.661781 | orchestrator | 2025-08-29 20:29:57.661793 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-08-29 20:29:57.661805 | orchestrator | Friday 29 August 2025 20:29:50 +0000 (0:00:00.165) 0:00:00.165 ********* 2025-08-29 20:29:57.661817 | orchestrator | ok: [testbed-manager] 2025-08-29 20:29:57.661875 | orchestrator | 2025-08-29 20:29:57.661886 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-08-29 20:29:57.661898 | orchestrator | Friday 29 August 2025 20:29:50 +0000 (0:00:00.539) 0:00:00.705 ********* 2025-08-29 20:29:57.661909 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:57.661921 | orchestrator | 2025-08-29 20:29:57.661932 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-08-29 20:29:57.661943 | orchestrator | Friday 29 August 2025 20:29:51 +0000 (0:00:00.488) 0:00:01.193 ********* 2025-08-29 20:29:57.661955 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-08-29 20:29:57.661966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-08-29 20:29:57.661977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-08-29 20:29:57.661989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-08-29 20:29:57.662000 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-08-29 20:29:57.662011 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-08-29 20:29:57.662076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-08-29 20:29:57.662088 | orchestrator | 2025-08-29 20:29:57.662099 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-08-29 20:29:57.662110 | orchestrator | Friday 29 August 2025 20:29:56 +0000 (0:00:05.525) 0:00:06.719 ********* 2025-08-29 20:29:57.662141 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:29:57.662153 | orchestrator | 2025-08-29 20:29:57.662165 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-08-29 20:29:57.662176 | orchestrator | Friday 29 August 2025 20:29:56 +0000 (0:00:00.060) 0:00:06.780 ********* 2025-08-29 20:29:57.662189 | orchestrator | changed: [testbed-manager] 2025-08-29 20:29:57.662201 | orchestrator | 2025-08-29 20:29:57.662213 | 
orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:29:57.662227 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:29:57.662240 | orchestrator | 2025-08-29 20:29:57.662252 | orchestrator | 2025-08-29 20:29:57.662266 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:29:57.662279 | orchestrator | Friday 29 August 2025 20:29:57 +0000 (0:00:00.582) 0:00:07.362 ********* 2025-08-29 20:29:57.662290 | orchestrator | =============================================================================== 2025-08-29 20:29:57.662301 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.53s 2025-08-29 20:29:57.662311 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-08-29 20:29:57.662322 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2025-08-29 20:29:57.662333 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-08-29 20:29:57.662344 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-08-29 20:29:57.921131 | orchestrator | + osism apply known-hosts 2025-08-29 20:30:09.804503 | orchestrator | 2025-08-29 20:30:09 | INFO  | Task 426c079c-d28d-4c3e-b99e-082772eb9067 (known-hosts) was prepared for execution. 2025-08-29 20:30:09.804620 | orchestrator | 2025-08-29 20:30:09 | INFO  | It takes a moment until task 426c079c-d28d-4c3e-b99e-082772eb9067 (known-hosts) has been started and output is visible here. 2025-08-29 20:30:26.294537 | orchestrator | 2025-08-29 20:30:26.294656 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-08-29 20:30:26.294674 | orchestrator | 2025-08-29 20:30:26.294687 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-08-29 20:30:26.294699 | orchestrator | Friday 29 August 2025 20:30:13 +0000 (0:00:00.161) 0:00:00.161 ********* 2025-08-29 20:30:26.294711 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 20:30:26.294722 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 20:30:26.294734 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 20:30:26.294745 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 20:30:26.294756 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 20:30:26.294767 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 20:30:26.294778 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 20:30:26.294789 | orchestrator | 2025-08-29 20:30:26.294800 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-08-29 20:30:26.294812 | orchestrator | Friday 29 August 2025 20:30:18 +0000 (0:00:05.573) 0:00:05.735 ********* 2025-08-29 20:30:26.294825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 20:30:26.294838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned 
entries of testbed-node-0) 2025-08-29 20:30:26.294849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 20:30:26.294948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 20:30:26.294961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 20:30:26.294972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 20:30:26.294983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 20:30:26.294994 | orchestrator | 2025-08-29 20:30:26.295006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295017 | orchestrator | Friday 29 August 2025 20:30:19 +0000 (0:00:00.182) 0:00:05.917 ********* 2025-08-29 20:30:26.295028 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHY+lRNtPcjJ5WS+oMNvjsasMVV2WA2UXTjdGPrWXLpq) 2025-08-29 20:30:26.295120 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ill8yHUDgIECCDb37wqjLEPGi3EN84tX9eE8xqhcoNk3jx30IwKT5/jN6josmjJcCHY/ZehR47sC9mlPwAaJkd6xzmH/bEMiIqGOD9FthvkHqemVSfImj9132uKq2nApEDXQsr5Nk6jFfMQHQAiOzkRJvXznAWlPSzuPwAkfmocn+QF1CUIRLVeDsEHMEsdS5Q41z58nSUrKk8amvNaa9+SG4w/hFoWm5d2cFHzJOJT8/v7LfoDDML3gFWSZ8PTyZKl+5+znhrnRyV/fs01zdza83eMftO5aSzT+DqpTLx9fKcCoZrrEqx1ZTHVVKojTKfk4g2jKH7Rl5MxRYA6VVZOVqMKjnXA5ua+n341y5XvEP30aQY8Z6vOPFe8R2/TMHUQdyz5pRpRcIDzoYvARNnYAkLdltV7ueUBYBJ3JDDOB6cng8OUoZ0miD7OuoUzG614Twa2egoqIXV+yRlbmC2tQ5rdfX8mng/7G4f4JKcHU1JPickxjhkNfbkOgB/s=) 2025-08-29 20:30:26.295149 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLw0WsWH1bSjLSvoXNC4ZPObPR2MtYWJsE/yz3DIiYdryBRwdGjILzz14u9/9CBOxu4VsBFT0wk/T0W5J6PFg1M=) 2025-08-29 20:30:26.295201 | orchestrator | 2025-08-29 20:30:26.295224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295245 | orchestrator | Friday 29 August 2025 20:30:20 +0000 (0:00:01.120) 0:00:07.037 ********* 2025-08-29 20:30:26.295279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGGo1K/M78ePWBdgwhGr0krxPXYrAaQ4A9wncsmfknvKvI06PayT7XLDCRNP1tU3lHe0aL8DVh4DE7RF5UGjZgIdrQi7T7adUm8Dh/ZLtbtCm2kjqVcegtVB2839eS6NG8CT6ozcVuiPg1N7GTZige3skahmvUBzGoTktB0tVSKn3m4AAvwfJLC2Es+D4jjzOos/A4HcGmQD0Esj9WrrTxnpEOtZzCA9EbnlLosVWAMfhQja+2iQaxTjqvGXzFmgo8rT+BCp/mWTgnBuMPXtNqoijdEMfsLYqNinevwYQVtXCRoTEGvdSbGPrjQr/xKZWoFqBkUDue7yLwn1M6QQ0wt2UDCcSqcad/VyTK7mvp5DdF+acI4ULAWTxIe8Nq820Al1I9+isoaosL5lfi8GvzCY+lsb4rnBMeLCXhHNxOa6hvzqG1xnVKcFzl2ofRnmFcZ3EQq+68HQbvG5NPkqlUpv2U0liEXZYEv5CijVuez0fHNP5YOyf9EkwEEnWHOG0=) 2025-08-29 20:30:26.295293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEckS/poL1e0ULB05yliI6RLilFHDOuVrBE6DSiMj9eX//OlF17dr8d6TInPu63GZDeogjWnb98o+BL84ve9SBg=) 2025-08-29 20:30:26.295306 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMuPc7C6/Z3KzUNXZEyk9FJI6HKzhLC8Qx4vnIAAG4O) 2025-08-29 20:30:26.295318 | orchestrator | 2025-08-29 20:30:26.295331 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295343 | orchestrator | Friday 29 August 2025 20:30:21 +0000 (0:00:01.027) 0:00:08.064 ********* 2025-08-29 20:30:26.295356 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGIPpK2mkGbUzlTanj3zrbygoImi+DBWBN4FTSqdbbYt8NkChdqMxk4iEZgmPAsdpZZ3pH4fE6hkIN1YlAe9t+8=) 2025-08-29 20:30:26.295368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhhT2AMXVc/13xBwvLqMvQDYbrgwgvkrjQJpqrYg3wr) 2025-08-29 20:30:26.295382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzLi2CGD9FIhUMHFDMZl5bKpDcn5wtpsu4zExpUK8aDH5XZT1Kk223lc6dif8C4J4bXDj0JAAsG7WwaNj40dG0Kc3ilQsyXILr+PlwIeSDmimx8685cDLzldZ95I+xFPVaxowqwKvEXTxhcycPWGc+lssa6rQiw/VXbYmIYrD/oo3dhPcSr/DUw9U8GCzSDnRAzbOAI7FEdGBrPsq742J5vBOUwj6AjgRy6CaCGqMv+zhNfvUTan2fEBJ9tKdP84wWqWuLPPHeJeMo4c7f79VVHlaP4XeyPiS5/FZ3eMItkm+i6B6XmfSvJNWiTci50dUdfjke7c1dxoJQ4WJN5JRWusrJcuVgtESg8CxZvSxPEQCPdyP/N6ty/sO2d8eVxSJqwrkzWUCdR/H3MDGrkhucbm0GeEEaK251+lMTmSX4wvQhzkYzKUTWbBjcWk6EJzIjdraV5brz8TN3Gxrj+TCFta+6qJOfUC1KJcX4SwS4rs8lo8LpHjIxGYsUF4dw0U=) 2025-08-29 20:30:26.295395 | orchestrator | 2025-08-29 20:30:26.295407 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295419 | orchestrator | Friday 29 August 2025 20:30:22 +0000 (0:00:01.000) 0:00:09.065 ********* 2025-08-29 20:30:26.295430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVk3NHHF874SEdS78ABKlY5FqVr5EpS2gX/n9fPFiM4DzEslijfNoBtVGDqeDLrf5+E/swkcvbdYLTrSXQ2iOddwpo31upsOqevvSPbpt9AwWX1I6uuRXFzduDl4peiAhUVnpZ/+YyPIeoz2WVgEofjaOWkwoln3g0+fEPbAC0YCRZOoWZ5LlNLqUEXR7LYeF45zxRBsJ2JrjbNGDX/hlQ8F4KtURw8v49qG/Aeg07OAJp5b02HaVB8dGuv/ftRVaMJPfLgBtJ8/yG9JjlHyqMMkujK8R7W/g9rlaJY7SPzRmCZbfg6Oj3g5akaj7ws7hZCOl7BvX+fx9MfiZYuuJ2FyxLY5eabCXtG1BsP6eyYUunVpbvT67yVBtRcsmNGKFzliRiU5dHBXFpW2x+f4qnXHxuYeuylH//9IO+4qnMQsAOdQrDPoDQh8AB2D7cgtqlyWoHweUjimpxidwM8t13eAmvYeKJqEREAkDxttdctt+G0pN/XOuS/UMU9M2wmLU=) 2025-08-29 20:30:26.295441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOWXmCyQQQKMi4x6KWXsI/K9jj/7FBs9hdqBNkhAx/0VKIUOK+VhEr/J5nnVAJaQm7+7t0WxUBia71ATP9ICVf4=) 2025-08-29 20:30:26.295452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBkkkpbcbrneu92k498f+LtXM+R2X0RebKGE9GnkFBO6) 2025-08-29 20:30:26.295473 | orchestrator | 2025-08-29 20:30:26.295484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295495 | orchestrator | Friday 29 August 2025 20:30:23 +0000 (0:00:01.013) 0:00:10.079 ********* 2025-08-29 20:30:26.295506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9ZkIiouxNFSt3E3EcTV6DJn6BhWos6RkJne+13BOPHhZjfHJw6I6hfn6xY/xKGPCIci13tcpwLXJJOJXDOflGXieq5gdsDyNLgNxjpaMJkmNFtw3EPlL+HCFDTkW+P8o6CdVCQ26Sp0ojqWnWqkXTJPm7W3vwgkh0Vf5M54MKnAPfNCstvWQTWWl8kzxPHFNmVQKvnWjp7xQrQ0F5OonmwT6kyJzAUGrKJzpcQ+tv0UG8i2gS9MXTH+8KXrMkjCm59k2xPZ42vQPrTIZJX1RJczNO3Pxy9EHLLRPqkaLEpANcLWl2x9XuUDrssq/DIX0S3+m5k4CHUQARr7lpG2Eon4bc1mbZCAX5z/F+ENuSLS97+IxveT2kvPntdPexgwKJKY3Z/VSDLwqW+soG36JydU5kTtg3rgba9mzxAHr0fxm3o6E6ecqXLV1NwpIBra9bcTbyU3mZ1DvZgu3eBhsY2L4+f8yl9DBmiAEHSFd2Xx7UyS3WIFXpCPowbL2a2Bs=) 2025-08-29 20:30:26.295525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM+jWCzDUto06Kw0Sjh6XR5gcYxg6SXyhGEIUTc/B1Mz5T9Jlfzz8ehlLPe//fgtOlcF5D4NWk4HzqbBA6uW7cY=) 2025-08-29 20:30:26.295536 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJLUxARbBq1qpkKTQqd8iSuFS5jiPhiL9KkKPPYaLYm) 2025-08-29 20:30:26.295547 | orchestrator | 2025-08-29 20:30:26.295558 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:26.295569 | orchestrator | Friday 29 August 2025 20:30:25 +0000 (0:00:02.022) 0:00:12.102 ********* 2025-08-29 20:30:26.295587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLPA9rbvEEvODylwYP43Ce6Tva04XGRAblcEc2i/3KMmvP0ynIVC4pv/kFiJn7BoZG9wlWDC5inymaFcV83k6J8=) 2025-08-29 20:30:36.753966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhJsYV/ypjUIxIeYZAydR6+1BeoIITXq3uEXWcqPXpetmG0X5SNvxtT73mNLjASuIQh54QOVYyX6Jz/IRXEte/pwucYWsTIcu9xnVApS1/EmPnJHbEMkJ9KulSxPb+ZRdmSfjJ4aHwyLiVzrAYlsbgG3KUuB6d2+3COfvRt/qD664KaFZV0R5jWXkXJk7pfHEB5wcZkwH1rh0YsIwQsEyGOvsanCRkxcW+7jnkXFhMOdpJUoofVccoOJ3Hv12owaeUNsvZ7dLrBFzS6SeRHExuZ/7F868GiWh+w1PeZLWLwwD5jizfU05IMc3QhsBp5onag+XwXtFa+iPSJYV+MbmdT75X0fu0YMETJGL+Q1siHQ1Yty7imREp9cbdSupQV7kNjgL51naz//3setUhmg8iLjMk6se4ya588kuizBvGzXVteT1p9pLdFcr0mRHRJTMUiFbZ0ZKWzPB27BtY65o/GMFMJHtQhImqIPR/YZ7d9UoG5yN80nqLP2YP12qt5h8=) 2025-08-29 20:30:36.754137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDJUtnM2G5esjkO9swgIEfHHvuSByZJ48DuvED/cA+PH) 2025-08-29 20:30:36.754156 | orchestrator | 2025-08-29 20:30:36.754169 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:36.754182 | orchestrator | Friday 29 August 2025 20:30:26 +0000 (0:00:01.039) 0:00:13.141 ********* 2025-08-29 20:30:36.754194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUR/cpluXB/PHmDt2fFYpfKEylGiBc2MrCs26VMnLdx2AfSnklFpTAqHWoFCkKcTmTiS0j1nYZ66PPe407lLvNeRjyRf369W1R0yM600jnycw9rzI9g9pek7DGfWxAAWfj47CUmIthTjlYpE7WO3qhNeX9WsCe3Afp3qU1VOgMyFaFyAaenItpjc3mqPg8TDiKLW75NRqaicnosO9UC9RQtY+jq4ppBXklx5oUNoAUn1m6HKB6ezMlx2BJ9Z/ak1ZUmiiwhec0AzhSTYH2R65eEJGAUM7/rBtQxbNiNCcWdDirePe25c7S/a2uP6aI8kR6CQ49hNGOA2jI4uYmFYRnNZaZRy0l1z+ka8qbrLXoIfXvaS2EjtHko+CrciQQZQopfbv/E8krwti9J0tdHksQQrAU59Js8BJTOhvO6VusdeZ7dd/hrgnyH2Aj4ChiN4Ahemkx4TODRq5rIIjEkzwKWEum99GNEccXzkRvedgvis8ek/twIlz0el2HnkaD2ds=) 2025-08-29 20:30:36.754207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXUGlxofv053fjmmtzXTuHSj+yeAX9u9ivOW+zYBaAhO6MiCOeuYXX7cyrPyEeigWm/y2fjs6ohDukANC6C4KA=) 2025-08-29 
20:30:36.754220 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZtsQ9brbrlwUj37QHR+i8zyYOUqCGwm/IXadb3yzJV) 2025-08-29 20:30:36.754231 | orchestrator | 2025-08-29 20:30:36.754243 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 20:30:36.754279 | orchestrator | Friday 29 August 2025 20:30:27 +0000 (0:00:01.054) 0:00:14.196 ********* 2025-08-29 20:30:36.754292 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 20:30:36.754304 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 20:30:36.754314 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 20:30:36.754325 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 20:30:36.754336 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 20:30:36.754347 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 20:30:36.754359 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 20:30:36.754370 | orchestrator | 2025-08-29 20:30:36.754381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 20:30:36.754394 | orchestrator | Friday 29 August 2025 20:30:32 +0000 (0:00:05.218) 0:00:19.414 ********* 2025-08-29 20:30:36.754406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 20:30:36.754419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 20:30:36.754431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 20:30:36.754442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 20:30:36.754453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 20:30:36.754465 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 20:30:36.754479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 20:30:36.754491 | orchestrator | 2025-08-29 20:30:36.754521 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:36.754534 | orchestrator | Friday 29 August 2025 20:30:32 +0000 (0:00:00.159) 0:00:19.573 ********* 2025-08-29 20:30:36.754546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHY+lRNtPcjJ5WS+oMNvjsasMVV2WA2UXTjdGPrWXLpq) 2025-08-29 20:30:36.754581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4ill8yHUDgIECCDb37wqjLEPGi3EN84tX9eE8xqhcoNk3jx30IwKT5/jN6josmjJcCHY/ZehR47sC9mlPwAaJkd6xzmH/bEMiIqGOD9FthvkHqemVSfImj9132uKq2nApEDXQsr5Nk6jFfMQHQAiOzkRJvXznAWlPSzuPwAkfmocn+QF1CUIRLVeDsEHMEsdS5Q41z58nSUrKk8amvNaa9+SG4w/hFoWm5d2cFHzJOJT8/v7LfoDDML3gFWSZ8PTyZKl+5+znhrnRyV/fs01zdza83eMftO5aSzT+DqpTLx9fKcCoZrrEqx1ZTHVVKojTKfk4g2jKH7Rl5MxRYA6VVZOVqMKjnXA5ua+n341y5XvEP30aQY8Z6vOPFe8R2/TMHUQdyz5pRpRcIDzoYvARNnYAkLdltV7ueUBYBJ3JDDOB6cng8OUoZ0miD7OuoUzG614Twa2egoqIXV+yRlbmC2tQ5rdfX8mng/7G4f4JKcHU1JPickxjhkNfbkOgB/s=) 2025-08-29 20:30:36.754595 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLw0WsWH1bSjLSvoXNC4ZPObPR2MtYWJsE/yz3DIiYdryBRwdGjILzz14u9/9CBOxu4VsBFT0wk/T0W5J6PFg1M=) 2025-08-29 20:30:36.754607 | orchestrator | 2025-08-29 20:30:36.754620 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:36.754632 | orchestrator | Friday 29 August 2025 20:30:33 +0000 (0:00:00.997) 0:00:20.571 ********* 2025-08-29 20:30:36.754653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGGo1K/M78ePWBdgwhGr0krxPXYrAaQ4A9wncsmfknvKvI06PayT7XLDCRNP1tU3lHe0aL8DVh4DE7RF5UGjZgIdrQi7T7adUm8Dh/ZLtbtCm2kjqVcegtVB2839eS6NG8CT6ozcVuiPg1N7GTZige3skahmvUBzGoTktB0tVSKn3m4AAvwfJLC2Es+D4jjzOos/A4HcGmQD0Esj9WrrTxnpEOtZzCA9EbnlLosVWAMfhQja+2iQaxTjqvGXzFmgo8rT+BCp/mWTgnBuMPXtNqoijdEMfsLYqNinevwYQVtXCRoTEGvdSbGPrjQr/xKZWoFqBkUDue7yLwn1M6QQ0wt2UDCcSqcad/VyTK7mvp5DdF+acI4ULAWTxIe8Nq820Al1I9+isoaosL5lfi8GvzCY+lsb4rnBMeLCXhHNxOa6hvzqG1xnVKcFzl2ofRnmFcZ3EQq+68HQbvG5NPkqlUpv2U0liEXZYEv5CijVuez0fHNP5YOyf9EkwEEnWHOG0=) 2025-08-29 20:30:36.754666 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKMuPc7C6/Z3KzUNXZEyk9FJI6HKzhLC8Qx4vnIAAG4O) 2025-08-29 20:30:36.754678 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEckS/poL1e0ULB05yliI6RLilFHDOuVrBE6DSiMj9eX//OlF17dr8d6TInPu63GZDeogjWnb98o+BL84ve9SBg=) 2025-08-29 20:30:36.754690 | orchestrator | 2025-08-29 20:30:36.754703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:36.754715 | orchestrator | Friday 29 August 2025 20:30:34 +0000 (0:00:01.016) 0:00:21.587 ********* 2025-08-29 20:30:36.754728 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCYzLi2CGD9FIhUMHFDMZl5bKpDcn5wtpsu4zExpUK8aDH5XZT1Kk223lc6dif8C4J4bXDj0JAAsG7WwaNj40dG0Kc3ilQsyXILr+PlwIeSDmimx8685cDLzldZ95I+xFPVaxowqwKvEXTxhcycPWGc+lssa6rQiw/VXbYmIYrD/oo3dhPcSr/DUw9U8GCzSDnRAzbOAI7FEdGBrPsq742J5vBOUwj6AjgRy6CaCGqMv+zhNfvUTan2fEBJ9tKdP84wWqWuLPPHeJeMo4c7f79VVHlaP4XeyPiS5/FZ3eMItkm+i6B6XmfSvJNWiTci50dUdfjke7c1dxoJQ4WJN5JRWusrJcuVgtESg8CxZvSxPEQCPdyP/N6ty/sO2d8eVxSJqwrkzWUCdR/H3MDGrkhucbm0GeEEaK251+lMTmSX4wvQhzkYzKUTWbBjcWk6EJzIjdraV5brz8TN3Gxrj+TCFta+6qJOfUC1KJcX4SwS4rs8lo8LpHjIxGYsUF4dw0U=) 2025-08-29 20:30:36.754741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGIPpK2mkGbUzlTanj3zrbygoImi+DBWBN4FTSqdbbYt8NkChdqMxk4iEZgmPAsdpZZ3pH4fE6hkIN1YlAe9t+8=) 2025-08-29 20:30:36.754753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILhhT2AMXVc/13xBwvLqMvQDYbrgwgvkrjQJpqrYg3wr) 2025-08-29 
20:30:36.754765 | orchestrator | 2025-08-29 20:30:36.754777 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:36.754790 | orchestrator | Friday 29 August 2025 20:30:35 +0000 (0:00:01.000) 0:00:22.588 ********* 2025-08-29 20:30:36.754802 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOWXmCyQQQKMi4x6KWXsI/K9jj/7FBs9hdqBNkhAx/0VKIUOK+VhEr/J5nnVAJaQm7+7t0WxUBia71ATP9ICVf4=) 2025-08-29 20:30:36.754833 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVk3NHHF874SEdS78ABKlY5FqVr5EpS2gX/n9fPFiM4DzEslijfNoBtVGDqeDLrf5+E/swkcvbdYLTrSXQ2iOddwpo31upsOqevvSPbpt9AwWX1I6uuRXFzduDl4peiAhUVnpZ/+YyPIeoz2WVgEofjaOWkwoln3g0+fEPbAC0YCRZOoWZ5LlNLqUEXR7LYeF45zxRBsJ2JrjbNGDX/hlQ8F4KtURw8v49qG/Aeg07OAJp5b02HaVB8dGuv/ftRVaMJPfLgBtJ8/yG9JjlHyqMMkujK8R7W/g9rlaJY7SPzRmCZbfg6Oj3g5akaj7ws7hZCOl7BvX+fx9MfiZYuuJ2FyxLY5eabCXtG1BsP6eyYUunVpbvT67yVBtRcsmNGKFzliRiU5dHBXFpW2x+f4qnXHxuYeuylH//9IO+4qnMQsAOdQrDPoDQh8AB2D7cgtqlyWoHweUjimpxidwM8t13eAmvYeKJqEREAkDxttdctt+G0pN/XOuS/UMU9M2wmLU=) 2025-08-29 20:30:40.774949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBkkkpbcbrneu92k498f+LtXM+R2X0RebKGE9GnkFBO6) 2025-08-29 20:30:40.775037 | orchestrator | 2025-08-29 20:30:40.775048 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:40.775057 | orchestrator | Friday 29 August 2025 20:30:36 +0000 (0:00:01.011) 0:00:23.600 ********* 2025-08-29 20:30:40.775066 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM+jWCzDUto06Kw0Sjh6XR5gcYxg6SXyhGEIUTc/B1Mz5T9Jlfzz8ehlLPe//fgtOlcF5D4NWk4HzqbBA6uW7cY=) 2025-08-29 20:30:40.775096 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9ZkIiouxNFSt3E3EcTV6DJn6BhWos6RkJne+13BOPHhZjfHJw6I6hfn6xY/xKGPCIci13tcpwLXJJOJXDOflGXieq5gdsDyNLgNxjpaMJkmNFtw3EPlL+HCFDTkW+P8o6CdVCQ26Sp0ojqWnWqkXTJPm7W3vwgkh0Vf5M54MKnAPfNCstvWQTWWl8kzxPHFNmVQKvnWjp7xQrQ0F5OonmwT6kyJzAUGrKJzpcQ+tv0UG8i2gS9MXTH+8KXrMkjCm59k2xPZ42vQPrTIZJX1RJczNO3Pxy9EHLLRPqkaLEpANcLWl2x9XuUDrssq/DIX0S3+m5k4CHUQARr7lpG2Eon4bc1mbZCAX5z/F+ENuSLS97+IxveT2kvPntdPexgwKJKY3Z/VSDLwqW+soG36JydU5kTtg3rgba9mzxAHr0fxm3o6E6ecqXLV1NwpIBra9bcTbyU3mZ1DvZgu3eBhsY2L4+f8yl9DBmiAEHSFd2Xx7UyS3WIFXpCPowbL2a2Bs=) 2025-08-29 20:30:40.775107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJLUxARbBq1qpkKTQqd8iSuFS5jiPhiL9KkKPPYaLYm) 2025-08-29 20:30:40.775115 | orchestrator | 2025-08-29 20:30:40.775122 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:40.775130 | orchestrator | Friday 29 August 2025 20:30:37 +0000 (0:00:01.042) 0:00:24.643 ********* 2025-08-29 20:30:40.775137 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLPA9rbvEEvODylwYP43Ce6Tva04XGRAblcEc2i/3KMmvP0ynIVC4pv/kFiJn7BoZG9wlWDC5inymaFcV83k6J8=) 2025-08-29 20:30:40.775145 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDhJsYV/ypjUIxIeYZAydR6+1BeoIITXq3uEXWcqPXpetmG0X5SNvxtT73mNLjASuIQh54QOVYyX6Jz/IRXEte/pwucYWsTIcu9xnVApS1/EmPnJHbEMkJ9KulSxPb+ZRdmSfjJ4aHwyLiVzrAYlsbgG3KUuB6d2+3COfvRt/qD664KaFZV0R5jWXkXJk7pfHEB5wcZkwH1rh0YsIwQsEyGOvsanCRkxcW+7jnkXFhMOdpJUoofVccoOJ3Hv12owaeUNsvZ7dLrBFzS6SeRHExuZ/7F868GiWh+w1PeZLWLwwD5jizfU05IMc3QhsBp5onag+XwXtFa+iPSJYV+MbmdT75X0fu0YMETJGL+Q1siHQ1Yty7imREp9cbdSupQV7kNjgL51naz//3setUhmg8iLjMk6se4ya588kuizBvGzXVteT1p9pLdFcr0mRHRJTMUiFbZ0ZKWzPB27BtY65o/GMFMJHtQhImqIPR/YZ7d9UoG5yN80nqLP2YP12qt5h8=) 2025-08-29 20:30:40.775166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDJUtnM2G5esjkO9swgIEfHHvuSByZJ48DuvED/cA+PH) 2025-08-29 20:30:40.775174 | orchestrator | 2025-08-29 20:30:40.775181 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 20:30:40.775189 | orchestrator | Friday 29 August 2025 20:30:38 +0000 (0:00:01.017) 0:00:25.661 ********* 2025-08-29 20:30:40.775196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUR/cpluXB/PHmDt2fFYpfKEylGiBc2MrCs26VMnLdx2AfSnklFpTAqHWoFCkKcTmTiS0j1nYZ66PPe407lLvNeRjyRf369W1R0yM600jnycw9rzI9g9pek7DGfWxAAWfj47CUmIthTjlYpE7WO3qhNeX9WsCe3Afp3qU1VOgMyFaFyAaenItpjc3mqPg8TDiKLW75NRqaicnosO9UC9RQtY+jq4ppBXklx5oUNoAUn1m6HKB6ezMlx2BJ9Z/ak1ZUmiiwhec0AzhSTYH2R65eEJGAUM7/rBtQxbNiNCcWdDirePe25c7S/a2uP6aI8kR6CQ49hNGOA2jI4uYmFYRnNZaZRy0l1z+ka8qbrLXoIfXvaS2EjtHko+CrciQQZQopfbv/E8krwti9J0tdHksQQrAU59Js8BJTOhvO6VusdeZ7dd/hrgnyH2Aj4ChiN4Ahemkx4TODRq5rIIjEkzwKWEum99GNEccXzkRvedgvis8ek/twIlz0el2HnkaD2ds=) 2025-08-29 20:30:40.775204 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKXUGlxofv053fjmmtzXTuHSj+yeAX9u9ivOW+zYBaAhO6MiCOeuYXX7cyrPyEeigWm/y2fjs6ohDukANC6C4KA=) 2025-08-29 20:30:40.775212 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBZtsQ9brbrlwUj37QHR+i8zyYOUqCGwm/IXadb3yzJV) 2025-08-29 20:30:40.775219 | orchestrator | 2025-08-29 20:30:40.775226 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 20:30:40.775234 | orchestrator | Friday 29 August 2025 20:30:39 +0000 (0:00:01.016) 0:00:26.678 ********* 2025-08-29 20:30:40.775242 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 20:30:40.775249 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 20:30:40.775256 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 20:30:40.775269 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 20:30:40.775276 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 20:30:40.775297 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 20:30:40.775305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 20:30:40.775312 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:30:40.775320 | orchestrator | 2025-08-29 20:30:40.775327 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-08-29 20:30:40.775334 | orchestrator | Friday 29 August 2025 20:30:39 +0000 (0:00:00.162) 0:00:26.840 ********* 2025-08-29 20:30:40.775341 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:30:40.775349 | orchestrator | 2025-08-29 
20:30:40.775356 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 20:30:40.775367 | orchestrator | Friday 29 August 2025 20:30:40 +0000 (0:00:00.059) 0:00:26.900 ********* 2025-08-29 20:30:40.775374 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:30:40.775382 | orchestrator | 2025-08-29 20:30:40.775389 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 20:30:40.775396 | orchestrator | Friday 29 August 2025 20:30:40 +0000 (0:00:00.040) 0:00:26.940 ********* 2025-08-29 20:30:40.775403 | orchestrator | changed: [testbed-manager] 2025-08-29 20:30:40.775410 | orchestrator | 2025-08-29 20:30:40.775417 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:30:40.775425 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 20:30:40.775432 | orchestrator | 2025-08-29 20:30:40.775440 | orchestrator | 2025-08-29 20:30:40.775447 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:30:40.775454 | orchestrator | Friday 29 August 2025 20:30:40 +0000 (0:00:00.462) 0:00:27.403 ********* 2025-08-29 20:30:40.775461 | orchestrator | =============================================================================== 2025-08-29 20:30:40.775470 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.57s 2025-08-29 20:30:40.775478 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2025-08-29 20:30:40.775487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.02s 2025-08-29 20:30:40.775495 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 20:30:40.775503 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-08-29 20:30:40.775511 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-08-29 20:30:40.775520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-08-29 20:30:40.775528 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 20:30:40.775536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 20:30:40.775544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 20:30:40.775552 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 20:30:40.775560 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 20:30:40.775568 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 20:30:40.775576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 20:30:40.775585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 20:30:40.775593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 20:30:40.775601 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.46s 2025-08-29 
20:30:40.775609 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-08-29 20:30:40.775622 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-08-29 20:30:40.775631 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-08-29 20:30:41.043605 | orchestrator | + osism apply squid 2025-08-29 20:30:52.916900 | orchestrator | 2025-08-29 20:30:52 | INFO  | Task f7bec956-097c-4274-bbb4-02f1b760a0a9 (squid) was prepared for execution. 2025-08-29 20:30:52.917064 | orchestrator | 2025-08-29 20:30:52 | INFO  | It takes a moment until task f7bec956-097c-4274-bbb4-02f1b760a0a9 (squid) has been started and output is visible here. 2025-08-29 20:32:45.618848 | orchestrator | 2025-08-29 20:32:45.618963 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 20:32:45.618981 | orchestrator | 2025-08-29 20:32:45.618993 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 20:32:45.619005 | orchestrator | Friday 29 August 2025 20:30:56 +0000 (0:00:00.120) 0:00:00.120 ********* 2025-08-29 20:32:45.619016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:32:45.619029 | orchestrator | 2025-08-29 20:32:45.619040 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 20:32:45.619051 | orchestrator | Friday 29 August 2025 20:30:56 +0000 (0:00:00.065) 0:00:00.185 ********* 2025-08-29 20:32:45.619062 | orchestrator | ok: [testbed-manager] 2025-08-29 20:32:45.619075 | orchestrator | 2025-08-29 20:32:45.619086 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 20:32:45.619097 | orchestrator | Friday 29 August 2025 20:30:57 +0000 (0:00:01.092) 0:00:01.278 ********* 2025-08-29 20:32:45.619108 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 20:32:45.619168 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 20:32:45.619180 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 20:32:45.619191 | orchestrator | 2025-08-29 20:32:45.619202 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 20:32:45.619213 | orchestrator | Friday 29 August 2025 20:30:58 +0000 (0:00:00.989) 0:00:02.268 ********* 2025-08-29 20:32:45.619224 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 20:32:45.619235 | orchestrator | 2025-08-29 20:32:45.619246 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 20:32:45.619258 | orchestrator | Friday 29 August 2025 20:30:59 +0000 (0:00:00.926) 0:00:03.194 ********* 2025-08-29 20:32:45.619269 | orchestrator | ok: [testbed-manager] 2025-08-29 20:32:45.619287 | orchestrator | 2025-08-29 20:32:45.619305 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-08-29 20:32:45.619324 | orchestrator | Friday 29 August 2025 20:30:59 +0000 (0:00:00.313) 0:00:03.507 ********* 2025-08-29 20:32:45.619341 | orchestrator | changed: [testbed-manager] 2025-08-29 20:32:45.619359 | orchestrator | 2025-08-29 
20:32:45.619377 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 20:32:45.619426 | orchestrator | Friday 29 August 2025 20:31:00 +0000 (0:00:00.829) 0:00:04.337 ********* 2025-08-29 20:32:45.619447 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-08-29 20:32:45.619468 | orchestrator | ok: [testbed-manager] 2025-08-29 20:32:45.619488 | orchestrator | 2025-08-29 20:32:45.619509 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 20:32:45.619529 | orchestrator | Friday 29 August 2025 20:31:32 +0000 (0:00:32.130) 0:00:36.467 ********* 2025-08-29 20:32:45.619549 | orchestrator | changed: [testbed-manager] 2025-08-29 20:32:45.619562 | orchestrator | 2025-08-29 20:32:45.619574 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 20:32:45.619587 | orchestrator | Friday 29 August 2025 20:31:44 +0000 (0:00:11.842) 0:00:48.309 ********* 2025-08-29 20:32:45.619599 | orchestrator | Pausing for 60 seconds 2025-08-29 20:32:45.619640 | orchestrator | changed: [testbed-manager] 2025-08-29 20:32:45.619653 | orchestrator | 2025-08-29 20:32:45.619665 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 20:32:45.619678 | orchestrator | Friday 29 August 2025 20:32:44 +0000 (0:01:00.072) 0:01:48.382 ********* 2025-08-29 20:32:45.619690 | orchestrator | ok: [testbed-manager] 2025-08-29 20:32:45.619703 | orchestrator | 2025-08-29 20:32:45.619738 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 20:32:45.619757 | orchestrator | Friday 29 August 2025 20:32:44 +0000 (0:00:00.052) 0:01:48.434 ********* 2025-08-29 20:32:45.619775 | orchestrator | changed: [testbed-manager] 2025-08-29 20:32:45.619788 | orchestrator | 2025-08-29 20:32:45.619799 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:32:45.619810 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:32:45.619822 | orchestrator | 2025-08-29 20:32:45.619832 | orchestrator | 2025-08-29 20:32:45.619843 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:32:45.619854 | orchestrator | Friday 29 August 2025 20:32:45 +0000 (0:00:00.617) 0:01:49.052 ********* 2025-08-29 20:32:45.619865 | orchestrator | =============================================================================== 2025-08-29 20:32:45.619876 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-08-29 20:32:45.619887 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.13s 2025-08-29 20:32:45.619897 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.84s 2025-08-29 20:32:45.619909 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.09s 2025-08-29 20:32:45.619920 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.99s 2025-08-29 20:32:45.619930 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s 2025-08-29 20:32:45.619941 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s 2025-08-29 
20:32:45.619952 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2025-08-29 20:32:45.619962 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s
2025-08-29 20:32:45.619973 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2025-08-29 20:32:45.619984 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s
2025-08-29 20:32:45.918964 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-08-29 20:32:45.919034 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-08-29 20:32:45.924466 | orchestrator | ++ semver 9.2.0 9.0.0
2025-08-29 20:32:45.982203 | orchestrator | + [[ 1 -lt 0 ]]
2025-08-29 20:32:45.982292 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-08-29 20:32:58.015054 | orchestrator | 2025-08-29 20:32:58 | INFO  | Task f67db234-6e9a-4437-a39e-4ae45a8d79a3 (operator) was prepared for execution.
2025-08-29 20:32:58.015218 | orchestrator | 2025-08-29 20:32:58 | INFO  | It takes a moment until task f67db234-6e9a-4437-a39e-4ae45a8d79a3 (operator) has been started and output is visible here.
2025-08-29 20:33:12.840987 | orchestrator |
2025-08-29 20:33:12.841110 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-08-29 20:33:12.841127 | orchestrator |
2025-08-29 20:33:12.841203 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 20:33:12.841216 | orchestrator | Friday 29 August 2025 20:33:01 +0000 (0:00:00.110) 0:00:00.110 *********
2025-08-29 20:33:12.841227 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:33:12.841240 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:33:12.841251 | orchestrator | ok: [testbed-node-5]
2025-08-29 20:33:12.841262 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:33:12.841273 | orchestrator | ok: [testbed-node-3]
2025-08-29 20:33:12.841307 | orchestrator | ok: [testbed-node-4]
2025-08-29 20:33:12.841319 | orchestrator |
2025-08-29 20:33:12.841330 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-08-29 20:33:12.841341 | orchestrator | Friday 29 August 2025 20:33:05 +0000 (0:00:03.425) 0:00:03.535 *********
2025-08-29 20:33:12.841352 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:33:12.841363 | orchestrator | ok: [testbed-node-3]
2025-08-29 20:33:12.841389 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:33:12.841411 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:33:12.841422 | orchestrator | ok: [testbed-node-4]
2025-08-29 20:33:12.841433 | orchestrator | ok: [testbed-node-5]
2025-08-29 20:33:12.841444 | orchestrator |
2025-08-29 20:33:12.841455 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-08-29 20:33:12.841465 | orchestrator |
2025-08-29 20:33:12.841476 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-08-29 20:33:12.841487 | orchestrator | Friday 29 August 2025 20:33:05 +0000 (0:00:00.137) 0:00:04.162 *********
2025-08-29 20:33:12.841498 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:33:12.841511 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:33:12.841524 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:33:12.841536 | orchestrator | ok: [testbed-node-3] 2025-08-29
20:33:12.841548 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:33:12.841561 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:33:12.841573 | orchestrator | 2025-08-29 20:33:12.841585 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 20:33:12.841597 | orchestrator | Friday 29 August 2025 20:33:05 +0000 (0:00:00.137) 0:00:04.300 ********* 2025-08-29 20:33:12.841610 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:33:12.841622 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:33:12.841634 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:33:12.841647 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:33:12.841659 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:33:12.841671 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:33:12.841683 | orchestrator | 2025-08-29 20:33:12.841696 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 20:33:12.841708 | orchestrator | Friday 29 August 2025 20:33:05 +0000 (0:00:00.149) 0:00:04.450 ********* 2025-08-29 20:33:12.841721 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:12.841734 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:12.841746 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:12.841758 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:12.841770 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:12.841783 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:12.841796 | orchestrator | 2025-08-29 20:33:12.841812 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 20:33:12.841832 | orchestrator | Friday 29 August 2025 20:33:06 +0000 (0:00:00.549) 0:00:05.000 ********* 2025-08-29 20:33:12.841851 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:12.841871 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:12.841890 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:12.841902 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:12.841913 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:12.841924 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:12.841934 | orchestrator | 2025-08-29 20:33:12.841946 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 20:33:12.841957 | orchestrator | Friday 29 August 2025 20:33:07 +0000 (0:00:00.749) 0:00:05.749 ********* 2025-08-29 20:33:12.841968 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 20:33:12.841979 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 20:33:12.841989 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 20:33:12.842000 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 20:33:12.842011 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 20:33:12.842084 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 20:33:12.842095 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 20:33:12.842116 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 20:33:12.842152 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-08-29 20:33:12.842166 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 20:33:12.842177 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 20:33:12.842188 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 20:33:12.842199 | orchestrator | 
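The shell trace just before the operator play above gates the image configuration on the release tag: anything other than latest switches the Kolla namespace to kolla/release, and the manager version is then compared against 9.0.0 (the \l\a\t\e\s\t escaping is only bash xtrace quoting of the unquoted pattern latest). A readable sketch of that gate, with the version pulled into a variable for illustration and assuming the semver helper prints a negative/zero/positive comparison result, as the trace suggests:

  manager_version=9.2.0   # hard-coded in the job; a variable is used here only for readability
  if [[ "${manager_version}" != "latest" ]]; then
      # tagged releases pull Kolla images from the kolla/release namespace
      sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' \
          /opt/configuration/inventory/group_vars/all/kolla.yml
  fi
  if [[ "$(semver "${manager_version}" 9.0.0)" -lt 0 ]]; then
      :   # only taken for versions older than 9.0.0; skipped in this run, since semver printed 1
  fi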
2025-08-29 20:33:12.842213 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 20:33:12.842225 | orchestrator | Friday 29 August 2025 20:33:08 +0000 (0:00:01.119) 0:00:06.869 ********* 2025-08-29 20:33:12.842236 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:12.842247 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:12.842258 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:12.842270 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:12.842280 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:12.842291 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:12.842303 | orchestrator | 2025-08-29 20:33:12.842314 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 20:33:12.842325 | orchestrator | Friday 29 August 2025 20:33:09 +0000 (0:00:01.221) 0:00:08.091 ********* 2025-08-29 20:33:12.842336 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 20:33:12.842348 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-08-29 20:33:12.842359 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 20:33:12.842370 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842400 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842412 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842423 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842434 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842444 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 20:33:12.842455 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842466 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842495 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842507 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842518 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842529 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 20:33:12.842540 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842555 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842567 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842578 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842589 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842600 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 20:33:12.842611 | orchestrator | 2025-08-29 20:33:12.842622 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 20:33:12.842633 | orchestrator | Friday 29 August 2025 20:33:10 +0000 (0:00:01.202) 0:00:09.293 ********* 2025-08-29 20:33:12.842644 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
20:33:12.842655 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:12.842666 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:12.842677 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:33:12.842687 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:12.842705 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:12.842716 | orchestrator | 2025-08-29 20:33:12.842727 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 20:33:12.842738 | orchestrator | Friday 29 August 2025 20:33:10 +0000 (0:00:00.150) 0:00:09.444 ********* 2025-08-29 20:33:12.842749 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:12.842760 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:12.842771 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:12.842781 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:12.842792 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:12.842803 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:12.842814 | orchestrator | 2025-08-29 20:33:12.842824 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 20:33:12.842835 | orchestrator | Friday 29 August 2025 20:33:11 +0000 (0:00:00.541) 0:00:09.985 ********* 2025-08-29 20:33:12.842847 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:33:12.842866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:12.842884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:12.842904 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:33:12.842923 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:12.842942 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:12.842960 | orchestrator | 2025-08-29 20:33:12.842972 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 20:33:12.842983 | orchestrator | Friday 29 August 2025 20:33:11 +0000 (0:00:00.159) 0:00:10.144 ********* 2025-08-29 20:33:12.842994 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 20:33:12.843005 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:12.843015 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 20:33:12.843026 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:12.843037 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 20:33:12.843047 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 20:33:12.843058 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:12.843069 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 20:33:12.843080 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:12.843091 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:12.843101 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 20:33:12.843112 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:12.843123 | orchestrator | 2025-08-29 20:33:12.843182 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 20:33:12.843194 | orchestrator | Friday 29 August 2025 20:33:12 +0000 (0:00:00.706) 0:00:10.851 ********* 2025-08-29 20:33:12.843205 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:33:12.843216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:12.843227 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:12.843238 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 20:33:12.843249 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:12.843260 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:12.843271 | orchestrator | 2025-08-29 20:33:12.843282 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 20:33:12.843293 | orchestrator | Friday 29 August 2025 20:33:12 +0000 (0:00:00.156) 0:00:11.008 ********* 2025-08-29 20:33:12.843303 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:33:12.843314 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:12.843325 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:12.843336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:33:12.843347 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:12.843357 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:12.843368 | orchestrator | 2025-08-29 20:33:12.843379 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 20:33:12.843390 | orchestrator | Friday 29 August 2025 20:33:12 +0000 (0:00:00.147) 0:00:11.155 ********* 2025-08-29 20:33:12.843401 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:33:12.843421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:12.843432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:12.843443 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:33:12.843462 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:13.903512 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:13.903596 | orchestrator | 2025-08-29 20:33:13.903607 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 20:33:13.903618 | orchestrator | Friday 29 August 2025 20:33:12 +0000 (0:00:00.143) 0:00:11.299 ********* 2025-08-29 20:33:13.903626 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:33:13.903634 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:33:13.903642 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:33:13.903649 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:33:13.903657 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:33:13.903665 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:33:13.903673 | orchestrator | 2025-08-29 20:33:13.903681 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 20:33:13.903689 | orchestrator | Friday 29 August 2025 20:33:13 +0000 (0:00:00.627) 0:00:11.927 ********* 2025-08-29 20:33:13.903696 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:33:13.903704 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:33:13.903712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:33:13.903719 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:33:13.903727 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:33:13.903735 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:33:13.903743 | orchestrator | 2025-08-29 20:33:13.903750 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:33:13.903759 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:33:13.903769 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:33:13.903777 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 
2025-08-29 20:33:13.903785 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:33:13.903792 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:33:13.903800 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:33:13.903808 | orchestrator | 2025-08-29 20:33:13.903816 | orchestrator | 2025-08-29 20:33:13.903823 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:33:13.903831 | orchestrator | Friday 29 August 2025 20:33:13 +0000 (0:00:00.211) 0:00:12.138 ********* 2025-08-29 20:33:13.903839 | orchestrator | =============================================================================== 2025-08-29 20:33:13.903847 | orchestrator | Gathering Facts --------------------------------------------------------- 3.43s 2025-08-29 20:33:13.903854 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.22s 2025-08-29 20:33:13.903862 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2025-08-29 20:33:13.903870 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s 2025-08-29 20:33:13.903878 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.75s 2025-08-29 20:33:13.903885 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s 2025-08-29 20:33:13.903893 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-08-29 20:33:13.903922 | orchestrator | Do not require tty for all users ---------------------------------------- 0.63s 2025-08-29 20:33:13.903931 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.55s 2025-08-29 20:33:13.903938 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2025-08-29 20:33:13.903946 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2025-08-29 20:33:13.903953 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-08-29 20:33:13.903960 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-08-29 20:33:13.903968 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-08-29 20:33:13.903975 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-08-29 20:33:13.903983 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-08-29 20:33:13.903990 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-08-29 20:33:13.903998 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-08-29 20:33:14.182931 | orchestrator | + osism apply --environment custom facts 2025-08-29 20:33:15.937663 | orchestrator | 2025-08-29 20:33:15 | INFO  | Trying to run play facts in environment custom 2025-08-29 20:33:26.025305 | orchestrator | 2025-08-29 20:33:26 | INFO  | Task 88d49695-6e56-4ca8-b113-9af51db4a81a (facts) was prepared for execution. 
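Every deployment step in this job is a single osism apply call issued on the manager; the variants seen in this log differ only in what is forwarded to Ansible underneath (-u and -l appear to map to the Ansible connection user and host limit, and --environment custom runs the play from the testbed's custom environment, as the INFO line above notes). A short usage recap of the invocations used here:

  osism apply squid                                 # proxy service on the manager, defaults only
  osism apply operator -u ubuntu -l testbed-nodes   # connect as ubuntu, restrict to the nodes
  osism apply --environment custom facts            # play "facts" from the custom environment
  osism apply bootstrap                             # generic host bootstrap, run further below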
2025-08-29 20:33:26.025405 | orchestrator | 2025-08-29 20:33:26 | INFO  | It takes a moment until task 88d49695-6e56-4ca8-b113-9af51db4a81a (facts) has been started and output is visible here. 2025-08-29 20:34:08.322067 | orchestrator | 2025-08-29 20:34:08.322231 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-08-29 20:34:08.322249 | orchestrator | 2025-08-29 20:34:08.322260 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 20:34:08.322271 | orchestrator | Friday 29 August 2025 20:33:29 +0000 (0:00:00.063) 0:00:00.063 ********* 2025-08-29 20:34:08.322282 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:08.322302 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:08.322314 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:08.322324 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:08.322336 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.322346 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.322356 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.322367 | orchestrator | 2025-08-29 20:34:08.322377 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-08-29 20:34:08.322407 | orchestrator | Friday 29 August 2025 20:33:30 +0000 (0:00:01.317) 0:00:01.380 ********* 2025-08-29 20:34:08.322424 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:08.322439 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:08.322453 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:08.322468 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:08.322488 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.322502 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.322516 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.322531 | orchestrator | 2025-08-29 20:34:08.322545 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-08-29 20:34:08.322560 | orchestrator | 2025-08-29 20:34:08.322576 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 20:34:08.322590 | orchestrator | Friday 29 August 2025 20:33:31 +0000 (0:00:01.097) 0:00:02.478 ********* 2025-08-29 20:34:08.322612 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.322637 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.322652 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.322667 | orchestrator | 2025-08-29 20:34:08.322682 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 20:34:08.322699 | orchestrator | Friday 29 August 2025 20:33:31 +0000 (0:00:00.076) 0:00:02.554 ********* 2025-08-29 20:34:08.322749 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.322767 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.322784 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.322797 | orchestrator | 2025-08-29 20:34:08.322809 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 20:34:08.322820 | orchestrator | Friday 29 August 2025 20:33:32 +0000 (0:00:00.162) 0:00:02.717 ********* 2025-08-29 20:34:08.322832 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.322843 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.322853 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.322864 | 
orchestrator | 2025-08-29 20:34:08.322876 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 20:34:08.322887 | orchestrator | Friday 29 August 2025 20:33:32 +0000 (0:00:00.160) 0:00:02.878 ********* 2025-08-29 20:34:08.322901 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:08.322920 | orchestrator | 2025-08-29 20:34:08.322937 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 20:34:08.322952 | orchestrator | Friday 29 August 2025 20:33:32 +0000 (0:00:00.109) 0:00:02.987 ********* 2025-08-29 20:34:08.322966 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.322982 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.322998 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.323015 | orchestrator | 2025-08-29 20:34:08.323032 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 20:34:08.323049 | orchestrator | Friday 29 August 2025 20:33:32 +0000 (0:00:00.401) 0:00:03.389 ********* 2025-08-29 20:34:08.323061 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:08.323071 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:08.323081 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:08.323091 | orchestrator | 2025-08-29 20:34:08.323100 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 20:34:08.323111 | orchestrator | Friday 29 August 2025 20:33:32 +0000 (0:00:00.099) 0:00:03.488 ********* 2025-08-29 20:34:08.323120 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.323131 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.323141 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.323151 | orchestrator | 2025-08-29 20:34:08.323182 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 20:34:08.323192 | orchestrator | Friday 29 August 2025 20:33:33 +0000 (0:00:00.988) 0:00:04.476 ********* 2025-08-29 20:34:08.323208 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.323225 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.323241 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.323256 | orchestrator | 2025-08-29 20:34:08.323272 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 20:34:08.323289 | orchestrator | Friday 29 August 2025 20:33:34 +0000 (0:00:00.434) 0:00:04.911 ********* 2025-08-29 20:34:08.323306 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.323323 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.323338 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.323352 | orchestrator | 2025-08-29 20:34:08.323362 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 20:34:08.323372 | orchestrator | Friday 29 August 2025 20:33:35 +0000 (0:00:01.004) 0:00:05.916 ********* 2025-08-29 20:34:08.323381 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.323391 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.323400 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.323410 | orchestrator | 2025-08-29 20:34:08.323420 | orchestrator | TASK [Install required packages (RedHat)] 
************************************** 2025-08-29 20:34:08.323430 | orchestrator | Friday 29 August 2025 20:33:52 +0000 (0:00:17.010) 0:00:22.927 ********* 2025-08-29 20:34:08.323439 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:08.323459 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:08.323469 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:08.323479 | orchestrator | 2025-08-29 20:34:08.323488 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-08-29 20:34:08.323520 | orchestrator | Friday 29 August 2025 20:33:52 +0000 (0:00:00.088) 0:00:23.015 ********* 2025-08-29 20:34:08.323531 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:08.323540 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:08.323551 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:08.323560 | orchestrator | 2025-08-29 20:34:08.323570 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 20:34:08.323580 | orchestrator | Friday 29 August 2025 20:33:59 +0000 (0:00:06.954) 0:00:29.969 ********* 2025-08-29 20:34:08.323589 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.323599 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.323609 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.323618 | orchestrator | 2025-08-29 20:34:08.323628 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-08-29 20:34:08.323638 | orchestrator | Friday 29 August 2025 20:33:59 +0000 (0:00:00.380) 0:00:30.350 ********* 2025-08-29 20:34:08.323648 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-08-29 20:34:08.323657 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-08-29 20:34:08.323667 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-08-29 20:34:08.323683 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-08-29 20:34:08.323693 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-08-29 20:34:08.323703 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-08-29 20:34:08.323713 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-08-29 20:34:08.323722 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-08-29 20:34:08.323732 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-08-29 20:34:08.323741 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-08-29 20:34:08.323751 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-08-29 20:34:08.323761 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-08-29 20:34:08.323770 | orchestrator | 2025-08-29 20:34:08.323780 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 20:34:08.323790 | orchestrator | Friday 29 August 2025 20:34:03 +0000 (0:00:03.526) 0:00:33.877 ********* 2025-08-29 20:34:08.323799 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.323809 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.323819 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.323828 | orchestrator | 2025-08-29 20:34:08.323838 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 
20:34:08.323848 | orchestrator | 2025-08-29 20:34:08.323857 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 20:34:08.323867 | orchestrator | Friday 29 August 2025 20:34:04 +0000 (0:00:01.272) 0:00:35.150 ********* 2025-08-29 20:34:08.323876 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:08.323886 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:08.323896 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:08.323905 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:08.323915 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:08.323924 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:08.323934 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:08.323943 | orchestrator | 2025-08-29 20:34:08.323953 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:34:08.323963 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:34:08.323974 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:34:08.323990 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:34:08.324000 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:34:08.324010 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:34:08.324020 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:34:08.324029 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:34:08.324039 | orchestrator | 2025-08-29 20:34:08.324049 | orchestrator | 2025-08-29 20:34:08.324059 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:34:08.324068 | orchestrator | Friday 29 August 2025 20:34:08 +0000 (0:00:03.837) 0:00:38.988 ********* 2025-08-29 20:34:08.324078 | orchestrator | =============================================================================== 2025-08-29 20:34:08.324088 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.01s 2025-08-29 20:34:08.324097 | orchestrator | Install required packages (Debian) -------------------------------------- 6.95s 2025-08-29 20:34:08.324107 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.84s 2025-08-29 20:34:08.324117 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s 2025-08-29 20:34:08.324126 | orchestrator | Create custom facts directory ------------------------------------------- 1.32s 2025-08-29 20:34:08.324136 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s 2025-08-29 20:34:08.324152 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s 2025-08-29 20:34:08.528002 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.00s 2025-08-29 20:34:08.528104 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s 2025-08-29 20:34:08.528120 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s 2025-08-29 20:34:08.528132 | 
orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s 2025-08-29 20:34:08.528143 | orchestrator | Create custom facts directory ------------------------------------------- 0.38s 2025-08-29 20:34:08.528155 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2025-08-29 20:34:08.528210 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s 2025-08-29 20:34:08.528221 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2025-08-29 20:34:08.528233 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s 2025-08-29 20:34:08.528244 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-08-29 20:34:08.528256 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s 2025-08-29 20:34:08.794709 | orchestrator | + osism apply bootstrap 2025-08-29 20:34:20.736084 | orchestrator | 2025-08-29 20:34:20 | INFO  | Task b43454b9-4280-4b33-8371-a07e97430212 (bootstrap) was prepared for execution. 2025-08-29 20:34:20.736204 | orchestrator | 2025-08-29 20:34:20 | INFO  | It takes a moment until task b43454b9-4280-4b33-8371-a07e97430212 (bootstrap) has been started and output is visible here. 2025-08-29 20:34:35.762960 | orchestrator | 2025-08-29 20:34:35.763040 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-08-29 20:34:35.763052 | orchestrator | 2025-08-29 20:34:35.763062 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-08-29 20:34:35.763083 | orchestrator | Friday 29 August 2025 20:34:24 +0000 (0:00:00.121) 0:00:00.121 ********* 2025-08-29 20:34:35.763092 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:35.763101 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:35.763110 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:35.763117 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:35.763126 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:35.763134 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:35.763142 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:35.763150 | orchestrator | 2025-08-29 20:34:35.763166 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 20:34:35.763194 | orchestrator | 2025-08-29 20:34:35.763202 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 20:34:35.763210 | orchestrator | Friday 29 August 2025 20:34:24 +0000 (0:00:00.160) 0:00:00.282 ********* 2025-08-29 20:34:35.763218 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:35.763226 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:35.763234 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:35.763242 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:35.763250 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:35.763257 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:35.763265 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:35.763273 | orchestrator | 2025-08-29 20:34:35.763281 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-08-29 20:34:35.763289 | orchestrator | 2025-08-29 20:34:35.763297 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 
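The facts play recapped above distributes the testbed_ceph_devices* and testbed_ceph_osd_devices* files to the nodes as local Ansible facts, presumably for the later Ceph plays. A minimal way to double-check them after the run, assuming they land in Ansible's default local-facts directory (the fact names come from the task output; the path and the ad-hoc call below are assumptions, not shown in this log):

  # inspect the custom fact files on one of the storage nodes (location assumed)
  ssh testbed-node-3 'ls -l /etc/ansible/facts.d/'
  # or read them back as ansible_local facts (needs a matching inventory on the caller)
  ansible testbed-node-3 -m setup -a 'filter=ansible_local'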
2025-08-29 20:34:35.763305 | orchestrator | Friday 29 August 2025 20:34:28 +0000 (0:00:03.628) 0:00:03.911 ********* 2025-08-29 20:34:35.763313 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 20:34:35.763322 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 20:34:35.763330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-08-29 20:34:35.763337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 20:34:35.763345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 20:34:35.763353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 20:34:35.763361 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 20:34:35.763369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-08-29 20:34:35.763377 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-08-29 20:34:35.763385 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 20:34:35.763393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 20:34:35.763401 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 20:34:35.763408 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 20:34:35.763416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 20:34:35.763424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-08-29 20:34:35.763432 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 20:34:35.763440 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 20:34:35.763448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 20:34:35.763456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 20:34:35.763464 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:35.763471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 20:34:35.763479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 20:34:35.763487 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 20:34:35.763495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 20:34:35.763503 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-08-29 20:34:35.763516 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 20:34:35.763524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 20:34:35.763532 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-08-29 20:34:35.763541 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-08-29 20:34:35.763550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 20:34:35.763559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 20:34:35.763568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:35.763577 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 20:34:35.763586 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-08-29 20:34:35.763595 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-08-29 20:34:35.763604 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-08-29 20:34:35.763613 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-08-29 20:34:35.763622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 20:34:35.763634 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-08-29 20:34:35.763643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 20:34:35.763652 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:35.763661 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:35.763670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 20:34:35.763679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-08-29 20:34:35.763688 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:35.763697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 20:34:35.763707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 20:34:35.763727 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-08-29 20:34:35.763737 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 20:34:35.763746 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-08-29 20:34:35.763755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-08-29 20:34:35.763764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-08-29 20:34:35.763772 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:35.763780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-08-29 20:34:35.763788 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-08-29 20:34:35.763796 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:35.763804 | orchestrator | 2025-08-29 20:34:35.763812 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-08-29 20:34:35.763820 | orchestrator | 2025-08-29 20:34:35.763828 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-08-29 20:34:35.763836 | orchestrator | Friday 29 August 2025 20:34:28 +0000 (0:00:00.383) 0:00:04.294 ********* 2025-08-29 20:34:35.763843 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:35.763851 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:35.763859 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:35.763867 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:35.763875 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:35.763883 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:35.763891 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:35.763899 | orchestrator | 2025-08-29 20:34:35.763906 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-08-29 20:34:35.763914 | orchestrator | Friday 29 August 2025 20:34:29 +0000 (0:00:01.166) 0:00:05.461 ********* 2025-08-29 20:34:35.763922 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:35.763930 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:35.763938 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:35.763946 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:35.763954 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:35.763966 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:35.763974 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:35.763981 | orchestrator | 2025-08-29 
20:34:35.763989 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-08-29 20:34:35.763997 | orchestrator | Friday 29 August 2025 20:34:31 +0000 (0:00:01.300) 0:00:06.761 ********* 2025-08-29 20:34:35.764006 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:35.764015 | orchestrator | 2025-08-29 20:34:35.764024 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-08-29 20:34:35.764032 | orchestrator | Friday 29 August 2025 20:34:31 +0000 (0:00:00.264) 0:00:07.026 ********* 2025-08-29 20:34:35.764040 | orchestrator | changed: [testbed-manager] 2025-08-29 20:34:35.764048 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:35.764056 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:35.764064 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:35.764072 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:35.764079 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:35.764087 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:35.764095 | orchestrator | 2025-08-29 20:34:35.764103 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-08-29 20:34:35.764111 | orchestrator | Friday 29 August 2025 20:34:33 +0000 (0:00:01.977) 0:00:09.003 ********* 2025-08-29 20:34:35.764119 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:35.764128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:35.764137 | orchestrator | 2025-08-29 20:34:35.764145 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-08-29 20:34:35.764153 | orchestrator | Friday 29 August 2025 20:34:33 +0000 (0:00:00.256) 0:00:09.260 ********* 2025-08-29 20:34:35.764160 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:35.764182 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:35.764191 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:35.764199 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:35.764207 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:35.764214 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:35.764222 | orchestrator | 2025-08-29 20:34:35.764230 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-08-29 20:34:35.764238 | orchestrator | Friday 29 August 2025 20:34:34 +0000 (0:00:01.011) 0:00:10.272 ********* 2025-08-29 20:34:35.764246 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:35.764254 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:35.764262 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:35.764269 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:35.764277 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:35.764285 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:35.764292 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:35.764300 | orchestrator | 2025-08-29 20:34:35.764308 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 
20:34:35.764316 | orchestrator | Friday 29 August 2025 20:34:35 +0000 (0:00:00.546) 0:00:10.818 ********* 2025-08-29 20:34:35.764324 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:35.764332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:35.764339 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:35.764347 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:35.764355 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:35.764363 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:35.764371 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:35.764378 | orchestrator | 2025-08-29 20:34:35.764386 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 20:34:35.764399 | orchestrator | Friday 29 August 2025 20:34:35 +0000 (0:00:00.336) 0:00:11.154 ********* 2025-08-29 20:34:35.764407 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:35.764414 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:35.764426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:46.181461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:46.181588 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:46.181604 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:46.181617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:46.181629 | orchestrator | 2025-08-29 20:34:46.181642 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 20:34:46.181693 | orchestrator | Friday 29 August 2025 20:34:35 +0000 (0:00:00.141) 0:00:11.296 ********* 2025-08-29 20:34:46.181707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:46.181738 | orchestrator | 2025-08-29 20:34:46.181750 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 20:34:46.181761 | orchestrator | Friday 29 August 2025 20:34:36 +0000 (0:00:00.238) 0:00:11.534 ********* 2025-08-29 20:34:46.181773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:46.181784 | orchestrator | 2025-08-29 20:34:46.181795 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 20:34:46.181806 | orchestrator | Friday 29 August 2025 20:34:36 +0000 (0:00:00.243) 0:00:11.778 ********* 2025-08-29 20:34:46.181817 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.181831 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.181842 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.181853 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.181863 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.181874 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.181885 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.181896 | orchestrator | 2025-08-29 20:34:46.181907 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 20:34:46.181918 | orchestrator | Friday 29 August 2025 20:34:37 +0000 (0:00:01.125) 0:00:12.903 
********* 2025-08-29 20:34:46.181929 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:46.181941 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:46.181952 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:46.181962 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:46.181973 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:46.181986 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:46.181998 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:46.182010 | orchestrator | 2025-08-29 20:34:46.182075 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 20:34:46.182088 | orchestrator | Friday 29 August 2025 20:34:37 +0000 (0:00:00.158) 0:00:13.062 ********* 2025-08-29 20:34:46.182100 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.182112 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.182124 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.182136 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.182148 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.182160 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.182191 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.182205 | orchestrator | 2025-08-29 20:34:46.182217 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 20:34:46.182229 | orchestrator | Friday 29 August 2025 20:34:38 +0000 (0:00:00.442) 0:00:13.505 ********* 2025-08-29 20:34:46.182241 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:46.182275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:46.182288 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:46.182300 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:46.182312 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:46.182324 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:46.182336 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:46.182348 | orchestrator | 2025-08-29 20:34:46.182361 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 20:34:46.182373 | orchestrator | Friday 29 August 2025 20:34:38 +0000 (0:00:00.181) 0:00:13.686 ********* 2025-08-29 20:34:46.182384 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.182395 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:46.182448 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:46.182461 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:46.182472 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:46.182483 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:46.182493 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:46.182504 | orchestrator | 2025-08-29 20:34:46.182515 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 20:34:46.182526 | orchestrator | Friday 29 August 2025 20:34:38 +0000 (0:00:00.450) 0:00:14.137 ********* 2025-08-29 20:34:46.182536 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.182547 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:46.182558 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:46.182569 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:46.182579 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:46.182590 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:46.182600 
| orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:46.182611 | orchestrator | 2025-08-29 20:34:46.182622 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 20:34:46.182637 | orchestrator | Friday 29 August 2025 20:34:39 +0000 (0:00:00.976) 0:00:15.114 ********* 2025-08-29 20:34:46.182648 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.182659 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.182669 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.182680 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.182691 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.182702 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.182712 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.182723 | orchestrator | 2025-08-29 20:34:46.182733 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 20:34:46.182744 | orchestrator | Friday 29 August 2025 20:34:40 +0000 (0:00:01.032) 0:00:16.147 ********* 2025-08-29 20:34:46.182774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:46.182786 | orchestrator | 2025-08-29 20:34:46.182797 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 20:34:46.182808 | orchestrator | Friday 29 August 2025 20:34:40 +0000 (0:00:00.291) 0:00:16.439 ********* 2025-08-29 20:34:46.182818 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:46.182829 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:46.182840 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:34:46.182850 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:46.182861 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:46.182872 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:34:46.182882 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:34:46.182893 | orchestrator | 2025-08-29 20:34:46.182904 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 20:34:46.182915 | orchestrator | Friday 29 August 2025 20:34:42 +0000 (0:00:01.154) 0:00:17.593 ********* 2025-08-29 20:34:46.182925 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.182945 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.182956 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.182967 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.182978 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.182988 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.182999 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183010 | orchestrator | 2025-08-29 20:34:46.183020 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 20:34:46.183031 | orchestrator | Friday 29 August 2025 20:34:42 +0000 (0:00:00.173) 0:00:17.767 ********* 2025-08-29 20:34:46.183042 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183053 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.183063 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.183074 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.183085 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183095 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 20:34:46.183106 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183117 | orchestrator | 2025-08-29 20:34:46.183128 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 20:34:46.183139 | orchestrator | Friday 29 August 2025 20:34:42 +0000 (0:00:00.169) 0:00:17.936 ********* 2025-08-29 20:34:46.183150 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183160 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.183187 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.183199 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.183209 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183220 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.183231 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183241 | orchestrator | 2025-08-29 20:34:46.183252 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 20:34:46.183264 | orchestrator | Friday 29 August 2025 20:34:42 +0000 (0:00:00.152) 0:00:18.089 ********* 2025-08-29 20:34:46.183275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:34:46.183288 | orchestrator | 2025-08-29 20:34:46.183299 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 20:34:46.183310 | orchestrator | Friday 29 August 2025 20:34:42 +0000 (0:00:00.233) 0:00:18.322 ********* 2025-08-29 20:34:46.183321 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183332 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.183342 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.183353 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183364 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.183375 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.183385 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183396 | orchestrator | 2025-08-29 20:34:46.183407 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 20:34:46.183418 | orchestrator | Friday 29 August 2025 20:34:43 +0000 (0:00:00.477) 0:00:18.800 ********* 2025-08-29 20:34:46.183429 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:34:46.183440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:34:46.183450 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:34:46.183461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:34:46.183472 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:34:46.183482 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:34:46.183493 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:34:46.183504 | orchestrator | 2025-08-29 20:34:46.183515 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 20:34:46.183526 | orchestrator | Friday 29 August 2025 20:34:43 +0000 (0:00:00.194) 0:00:18.995 ********* 2025-08-29 20:34:46.183537 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183547 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:46.183558 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:46.183576 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183587 | orchestrator | ok: [testbed-node-4] 2025-08-29 
20:34:46.183598 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:34:46.183609 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183619 | orchestrator | 2025-08-29 20:34:46.183630 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 20:34:46.183641 | orchestrator | Friday 29 August 2025 20:34:44 +0000 (0:00:00.987) 0:00:19.982 ********* 2025-08-29 20:34:46.183657 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183668 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:34:46.183678 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:34:46.183689 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183700 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:34:46.183710 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:34:46.183721 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:34:46.183731 | orchestrator | 2025-08-29 20:34:46.183742 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 20:34:46.183753 | orchestrator | Friday 29 August 2025 20:34:45 +0000 (0:00:00.576) 0:00:20.559 ********* 2025-08-29 20:34:46.183764 | orchestrator | ok: [testbed-manager] 2025-08-29 20:34:46.183775 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:34:46.183785 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:34:46.183796 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:34:46.183814 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.985585 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.985712 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.985731 | orchestrator | 2025-08-29 20:35:24.985744 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 20:35:24.985757 | orchestrator | Friday 29 August 2025 20:34:46 +0000 (0:00:01.108) 0:00:21.667 ********* 2025-08-29 20:35:24.985769 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.985781 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.985792 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.985803 | orchestrator | changed: [testbed-manager] 2025-08-29 20:35:24.985814 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:35:24.985825 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:35:24.985835 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.985846 | orchestrator | 2025-08-29 20:35:24.985857 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 20:35:24.985868 | orchestrator | Friday 29 August 2025 20:35:02 +0000 (0:00:16.767) 0:00:38.435 ********* 2025-08-29 20:35:24.985879 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.985890 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.985901 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.985912 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.985922 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.985933 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.985944 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.985954 | orchestrator | 2025-08-29 20:35:24.985965 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 20:35:24.985976 | orchestrator | Friday 29 August 2025 20:35:03 +0000 (0:00:00.222) 0:00:38.658 ********* 2025-08-29 20:35:24.985987 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.985998 | orchestrator | ok: [testbed-node-0] 2025-08-29 
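The osism.commons.repository tasks above switch the hosts to the deb822-style /etc/apt/sources.list.d/ubuntu.sources layout used by Ubuntu 24.04: the classic sources.list is removed, a 99osism apt configuration snippet is dropped in, ubuntu.sources is written, and the package cache is refreshed. A hedged stand-alone equivalent; the template name and file contents below are placeholders, not values taken from the role:

- hosts: all
  become: true
  tasks:
    - name: Remove sources.list file in favour of deb822 ubuntu.sources
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent

    - name: Copy ubuntu.sources file        # template name is hypothetical
      ansible.builtin.template:
        src: ubuntu.sources.j2
        dest: /etc/apt/sources.list.d/ubuntu.sources
        mode: "0644"

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true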
20:35:24.986009 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.986075 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.986087 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.986098 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.986110 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.986122 | orchestrator | 2025-08-29 20:35:24.986134 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 20:35:24.986146 | orchestrator | Friday 29 August 2025 20:35:03 +0000 (0:00:00.225) 0:00:38.883 ********* 2025-08-29 20:35:24.986159 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.986171 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.986224 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.986265 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.986278 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.986290 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.986302 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.986314 | orchestrator | 2025-08-29 20:35:24.986326 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 20:35:24.986338 | orchestrator | Friday 29 August 2025 20:35:03 +0000 (0:00:00.196) 0:00:39.080 ********* 2025-08-29 20:35:24.986351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:35:24.986366 | orchestrator | 2025-08-29 20:35:24.986378 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 20:35:24.986391 | orchestrator | Friday 29 August 2025 20:35:03 +0000 (0:00:00.269) 0:00:39.349 ********* 2025-08-29 20:35:24.986402 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.986414 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.986425 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.986437 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.986449 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.986460 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.986471 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.986481 | orchestrator | 2025-08-29 20:35:24.986492 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 20:35:24.986503 | orchestrator | Friday 29 August 2025 20:35:05 +0000 (0:00:01.682) 0:00:41.032 ********* 2025-08-29 20:35:24.986514 | orchestrator | changed: [testbed-manager] 2025-08-29 20:35:24.986525 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:35:24.986535 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:35:24.986546 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.986556 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:35:24.986567 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:35:24.986577 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:35:24.986588 | orchestrator | 2025-08-29 20:35:24.986599 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 20:35:24.986609 | orchestrator | Friday 29 August 2025 20:35:06 +0000 (0:00:01.090) 0:00:42.123 ********* 2025-08-29 20:35:24.986620 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.986630 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 20:35:24.986641 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.986651 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.986675 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.986686 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.986697 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.986708 | orchestrator | 2025-08-29 20:35:24.986719 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 20:35:24.986730 | orchestrator | Friday 29 August 2025 20:35:07 +0000 (0:00:00.802) 0:00:42.926 ********* 2025-08-29 20:35:24.986742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:35:24.986754 | orchestrator | 2025-08-29 20:35:24.986776 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 20:35:24.986788 | orchestrator | Friday 29 August 2025 20:35:07 +0000 (0:00:00.272) 0:00:43.198 ********* 2025-08-29 20:35:24.986799 | orchestrator | changed: [testbed-manager] 2025-08-29 20:35:24.986810 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:35:24.986820 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:35:24.986831 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.986842 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:35:24.986853 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:35:24.986864 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:35:24.986874 | orchestrator | 2025-08-29 20:35:24.986909 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-08-29 20:35:24.986921 | orchestrator | Friday 29 August 2025 20:35:08 +0000 (0:00:00.952) 0:00:44.151 ********* 2025-08-29 20:35:24.986932 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:35:24.986943 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:35:24.986954 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:35:24.986964 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:35:24.986975 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:35:24.986985 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:35:24.986996 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:35:24.987006 | orchestrator | 2025-08-29 20:35:24.987017 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 20:35:24.987028 | orchestrator | Friday 29 August 2025 20:35:08 +0000 (0:00:00.245) 0:00:44.396 ********* 2025-08-29 20:35:24.987039 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:35:24.987049 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:35:24.987060 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.987070 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:35:24.987081 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:35:24.987091 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:35:24.987102 | orchestrator | changed: [testbed-manager] 2025-08-29 20:35:24.987113 | orchestrator | 2025-08-29 20:35:24.987123 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 20:35:24.987134 | orchestrator | Friday 29 August 2025 20:35:19 +0000 (0:00:10.800) 0:00:55.196 ********* 2025-08-29 20:35:24.987145 | 
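The osism.services.rsyslog block above installs rsyslog, lays down rsyslog.conf, keeps the service running, and adds a rule that forwards syslog messages to a local fluentd daemon (the extra log-server tasks are skipped on all hosts). One plausible shape for that forwarding rule, written as an Ansible copy task; the drop-in filename, target address and port are assumptions, not values read from the role:

- hosts: all
  become: true
  tasks:
    - name: Forward syslog messages to local fluentd daemon   # path/target/port assumed
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/60-fluentd.conf
        mode: "0644"
        content: |
          # forward everything to an assumed fluentd syslog input on localhost
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
      notify: Restart rsyslog

  handlers:
    - name: Restart rsyslog
      ansible.builtin.service:
        name: rsyslog
        state: restarted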
orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987156 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987166 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987177 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987220 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987231 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987242 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987252 | orchestrator | 2025-08-29 20:35:24.987263 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-08-29 20:35:24.987274 | orchestrator | Friday 29 August 2025 20:35:20 +0000 (0:00:00.957) 0:00:56.154 ********* 2025-08-29 20:35:24.987285 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987295 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987306 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987317 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987327 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987338 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987348 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987359 | orchestrator | 2025-08-29 20:35:24.987370 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-08-29 20:35:24.987381 | orchestrator | Friday 29 August 2025 20:35:21 +0000 (0:00:00.921) 0:00:57.075 ********* 2025-08-29 20:35:24.987391 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987402 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987413 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987423 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987446 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987457 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987468 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987479 | orchestrator | 2025-08-29 20:35:24.987490 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-08-29 20:35:24.987501 | orchestrator | Friday 29 August 2025 20:35:21 +0000 (0:00:00.215) 0:00:57.290 ********* 2025-08-29 20:35:24.987512 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987523 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987533 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987544 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987555 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987565 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987576 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987594 | orchestrator | 2025-08-29 20:35:24.987623 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-08-29 20:35:24.987635 | orchestrator | Friday 29 August 2025 20:35:22 +0000 (0:00:00.212) 0:00:57.503 ********* 2025-08-29 20:35:24.987646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:35:24.987658 | orchestrator | 2025-08-29 20:35:24.987668 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-08-29 20:35:24.987679 | orchestrator | Friday 29 August 2025 20:35:22 +0000 (0:00:00.257) 0:00:57.761 ********* 2025-08-29 20:35:24.987690 | 
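The two short roles above install util-linux-extra (which ships hwclock on Ubuntu 24.04), sync the hardware clock to the system clock, and make sure configfs is mounted at /sys/kernel/config. A minimal sketch of equivalent tasks; the assumption that the mount is driven through the systemd sys-kernel-config.mount unit comes from the task name, not from the role source:

- hosts: all
  become: true
  tasks:
    - name: Install util-linux-extra package
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    - name: Sync hardware clock
      ansible.builtin.command: hwclock --systohc
      changed_when: false   # assumption: report the sync as non-changing for idempotence

    - name: Start sys-kernel-config mount   # assumed to be the systemd mount unit
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started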
orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987701 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987712 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987722 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987733 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987743 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987754 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987765 | orchestrator | 2025-08-29 20:35:24.987776 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-08-29 20:35:24.987787 | orchestrator | Friday 29 August 2025 20:35:24 +0000 (0:00:01.860) 0:00:59.621 ********* 2025-08-29 20:35:24.987797 | orchestrator | changed: [testbed-manager] 2025-08-29 20:35:24.987808 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:35:24.987819 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:35:24.987830 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:35:24.987840 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:35:24.987856 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:35:24.987867 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:35:24.987878 | orchestrator | 2025-08-29 20:35:24.987889 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-08-29 20:35:24.987900 | orchestrator | Friday 29 August 2025 20:35:24 +0000 (0:00:00.608) 0:01:00.229 ********* 2025-08-29 20:35:24.987911 | orchestrator | ok: [testbed-manager] 2025-08-29 20:35:24.987922 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:35:24.987933 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:35:24.987944 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:35:24.987954 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:35:24.987965 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:35:24.987976 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:35:24.987986 | orchestrator | 2025-08-29 20:35:24.988004 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-08-29 20:37:50.420346 | orchestrator | Friday 29 August 2025 20:35:24 +0000 (0:00:00.244) 0:01:00.474 ********* 2025-08-29 20:37:50.420467 | orchestrator | ok: [testbed-manager] 2025-08-29 20:37:50.420484 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:37:50.420496 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:37:50.420507 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:37:50.420518 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:37:50.420529 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:37:50.420540 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:37:50.420551 | orchestrator | 2025-08-29 20:37:50.420563 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-08-29 20:37:50.420574 | orchestrator | Friday 29 August 2025 20:35:26 +0000 (0:00:01.500) 0:01:01.975 ********* 2025-08-29 20:37:50.420586 | orchestrator | changed: [testbed-manager] 2025-08-29 20:37:50.420597 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:37:50.420608 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:37:50.420619 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:37:50.420630 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:37:50.420641 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:37:50.420652 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:37:50.420663 | orchestrator | 2025-08-29 20:37:50.420674 | 
orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-08-29 20:37:50.420710 | orchestrator | Friday 29 August 2025 20:35:28 +0000 (0:00:02.103) 0:01:04.078 ********* 2025-08-29 20:37:50.420722 | orchestrator | ok: [testbed-manager] 2025-08-29 20:37:50.420733 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:37:50.420744 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:37:50.420755 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:37:50.420766 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:37:50.420776 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:37:50.420787 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:37:50.420798 | orchestrator | 2025-08-29 20:37:50.420809 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-08-29 20:37:50.420820 | orchestrator | Friday 29 August 2025 20:35:31 +0000 (0:00:02.675) 0:01:06.754 ********* 2025-08-29 20:37:50.420831 | orchestrator | ok: [testbed-manager] 2025-08-29 20:37:50.420844 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:37:50.420856 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:37:50.420868 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:37:50.420880 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:37:50.420892 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:37:50.420905 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:37:50.420917 | orchestrator | 2025-08-29 20:37:50.420929 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-08-29 20:37:50.420942 | orchestrator | Friday 29 August 2025 20:36:13 +0000 (0:00:42.431) 0:01:49.185 ********* 2025-08-29 20:37:50.420955 | orchestrator | changed: [testbed-manager] 2025-08-29 20:37:50.420968 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:37:50.420980 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:37:50.420992 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:37:50.421004 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:37:50.421017 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:37:50.421029 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:37:50.421042 | orchestrator | 2025-08-29 20:37:50.421054 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-08-29 20:37:50.421066 | orchestrator | Friday 29 August 2025 20:37:32 +0000 (0:01:18.653) 0:03:07.839 ********* 2025-08-29 20:37:50.421078 | orchestrator | ok: [testbed-manager] 2025-08-29 20:37:50.421091 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:37:50.421103 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:37:50.421115 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:37:50.421127 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:37:50.421139 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:37:50.421151 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:37:50.421163 | orchestrator | 2025-08-29 20:37:50.421175 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-08-29 20:37:50.421189 | orchestrator | Friday 29 August 2025 20:37:34 +0000 (0:00:01.862) 0:03:09.701 ********* 2025-08-29 20:37:50.421200 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:37:50.421210 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:37:50.421284 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:37:50.421296 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:37:50.421307 | orchestrator | ok: [testbed-node-2] 2025-08-29 
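The osism.commons.packages sequence above installs needrestart and sets its mode, refreshes the apt cache (respecting apt_cache_valid_time), downloads and applies upgrades, installs the required package set (the long 1:18 step), and finally cleans the cache and removes unused dependencies. Sketched below with ansible.builtin.apt only; the needrestart value, cache time, drop-in path and package list are illustrative placeholders rather than the role's defaults:

- hosts: all
  become: true
  vars:
    required_packages: [jq, tmux]            # illustrative placeholders only
  tasks:
    - name: Set needrestart mode              # 'a' = automatic restarts; value assumed
      ansible.builtin.copy:
        dest: /etc/needrestart/conf.d/zz-osism.conf
        mode: "0644"
        content: |
          $nrconf{restart} = 'a';

    - name: Update package cache and upgrade packages
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600                # placeholder for apt_cache_valid_time
        upgrade: dist

    - name: Install required packages
      ansible.builtin.apt:
        name: "{{ required_packages }}"
        state: present

    - name: Clean cache and remove unneeded dependencies
      ansible.builtin.apt:
        autoclean: true
        autoremove: true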
20:37:50.421318 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:37:50.421336 | orchestrator | changed: [testbed-manager] 2025-08-29 20:37:50.421355 | orchestrator | 2025-08-29 20:37:50.421373 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-08-29 20:37:50.421391 | orchestrator | Friday 29 August 2025 20:37:45 +0000 (0:00:11.347) 0:03:21.049 ********* 2025-08-29 20:37:50.421432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-08-29 20:37:50.421477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-08-29 20:37:50.421546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-08-29 20:37:50.421562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-08-29 20:37:50.421574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-08-29 20:37:50.421585 | orchestrator | 2025-08-29 20:37:50.421596 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-08-29 20:37:50.421607 | orchestrator | Friday 29 August 2025 20:37:45 +0000 (0:00:00.346) 0:03:21.395 ********* 2025-08-29 20:37:50.421618 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 20:37:50.421629 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:37:50.421639 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 20:37:50.421650 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:37:50.421661 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 20:37:50.421672 | orchestrator | skipping: [testbed-node-4] 
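The sysctl include above runs once per profile, and the item dicts in the log spell out the actual tuning: elasticsearch (vm.max_map_count), rabbitmq (TCP keepalive, buffer and backlog settings), generic (vm.swappiness), compute (nf_conntrack_max) and k3s_node (inotify instances). Re-expressed as a YAML variable purely for readability; the variable name is an assumption, the names and values are copied from the include items above:

sysctl_profiles:                       # hypothetical variable name
  elasticsearch:
    - { name: vm.max_map_count, value: 262144 }
  rabbitmq:
    - { name: net.ipv4.tcp_keepalive_time, value: 6 }
    - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
    - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
    - { name: net.core.wmem_max, value: 16777216 }
    - { name: net.core.rmem_max, value: 16777216 }
    - { name: net.ipv4.tcp_fin_timeout, value: 20 }
    - { name: net.ipv4.tcp_tw_reuse, value: 1 }
    - { name: net.core.somaxconn, value: 4096 }
    - { name: net.ipv4.tcp_syncookies, value: 0 }
    - { name: net.ipv4.tcp_max_syn_backlog, value: 8192 }
  generic:
    - { name: vm.swappiness, value: 1 }
  compute:
    - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
  k3s_node:
    - { name: fs.inotify.max_user_instances, value: 1024 }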
2025-08-29 20:37:50.421683 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 20:37:50.421693 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:37:50.421704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 20:37:50.421715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 20:37:50.421726 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 20:37:50.421737 | orchestrator | 2025-08-29 20:37:50.421747 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-08-29 20:37:50.421758 | orchestrator | Friday 29 August 2025 20:37:46 +0000 (0:00:00.644) 0:03:22.040 ********* 2025-08-29 20:37:50.421769 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 20:37:50.421781 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 20:37:50.421792 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 20:37:50.421803 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 20:37:50.421814 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 20:37:50.421824 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 20:37:50.421842 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 20:37:50.421853 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 20:37:50.421864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 20:37:50.421875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 20:37:50.421886 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:37:50.421897 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 20:37:50.421907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 20:37:50.421918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 20:37:50.421929 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 20:37:50.421940 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 20:37:50.421958 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 20:37:50.421976 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 20:37:50.421993 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 20:37:50.422010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 20:37:50.422100 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 20:37:50.422119 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:37:50.422150 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 20:37:55.344625 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 20:37:55.344739 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 20:37:55.344755 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 20:37:55.344766 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 20:37:55.344778 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 20:37:55.344789 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 20:37:55.344800 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 20:37:55.344811 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 20:37:55.344823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 20:37:55.344834 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 20:37:55.344844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 20:37:55.344856 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 20:37:55.344867 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 20:37:55.344878 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 20:37:55.344888 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 20:37:55.344899 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 20:37:55.344934 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 20:37:55.344945 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 20:37:55.344957 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 20:37:55.344968 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:37:55.344980 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:37:55.344991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 20:37:55.345002 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 20:37:55.345013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 20:37:55.345024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 20:37:55.345035 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 20:37:55.345045 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 20:37:55.345056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 20:37:55.345067 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 20:37:55.345078 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 20:37:55.345089 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345100 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345111 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345121 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345143 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 20:37:55.345175 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 20:37:55.345188 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 20:37:55.345201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 20:37:55.345214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 20:37:55.345257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 20:37:55.345270 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 20:37:55.345300 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 20:37:55.345313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 20:37:55.345325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 20:37:55.345337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 20:37:55.345349 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 20:37:55.345362 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 20:37:55.345374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 20:37:55.345395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 20:37:55.345407 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 20:37:55.345420 | orchestrator | 2025-08-29 20:37:55.345433 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-08-29 20:37:55.345445 | orchestrator | Friday 29 August 2025 20:37:50 +0000 
(0:00:03.862) 0:03:25.903 ********* 2025-08-29 20:37:55.345457 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345470 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345494 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345507 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345518 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345533 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 20:37:55.345545 | orchestrator | 2025-08-29 20:37:55.345555 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-08-29 20:37:55.345566 | orchestrator | Friday 29 August 2025 20:37:51 +0000 (0:00:01.443) 0:03:27.347 ********* 2025-08-29 20:37:55.345577 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 20:37:55.345588 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 20:37:55.345599 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:37:55.345610 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 20:37:55.345621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:37:55.345632 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 20:37:55.345643 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:37:55.345654 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:37:55.345665 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 20:37:55.345676 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 20:37:55.345687 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 20:37:55.345698 | orchestrator | 2025-08-29 20:37:55.345709 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-08-29 20:37:55.345720 | orchestrator | Friday 29 August 2025 20:37:53 +0000 (0:00:01.603) 0:03:28.951 ********* 2025-08-29 20:37:55.345731 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 20:37:55.345742 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 20:37:55.345753 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:37:55.345764 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 20:37:55.345775 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:37:55.345786 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:37:55.345797 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 20:37:55.345808 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 20:37:55.345818 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 20:37:55.345835 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 20:37:55.345852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 20:37:55.345863 | orchestrator | 2025-08-29 20:37:55.345874 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-08-29 20:37:55.345885 | orchestrator | Friday 29 August 2025 20:37:55 +0000 (0:00:01.599) 0:03:30.550 ********* 2025-08-29 20:37:55.345896 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:37:55.345907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:37:55.345918 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:37:55.345929 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:37:55.345940 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:37:55.345950 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:37:55.345968 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:38:07.295724 | orchestrator | 2025-08-29 20:38:07.295842 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-08-29 20:38:07.295860 | orchestrator | Friday 29 August 2025 20:37:55 +0000 (0:00:00.285) 0:03:30.835 ********* 2025-08-29 20:38:07.295872 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:07.295885 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:07.295896 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:07.295907 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:07.295919 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:07.295930 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:07.295940 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:07.295952 | orchestrator | 2025-08-29 20:38:07.295963 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-08-29 20:38:07.295974 | orchestrator | Friday 29 August 2025 20:38:01 +0000 (0:00:05.917) 0:03:36.753 ********* 2025-08-29 20:38:07.295986 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-08-29 20:38:07.295997 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-08-29 20:38:07.296008 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:38:07.296020 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-08-29 20:38:07.296031 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:38:07.296042 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-08-29 20:38:07.296053 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:38:07.296064 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:38:07.296075 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-08-29 20:38:07.296086 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:38:07.296097 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-08-29 20:38:07.296108 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:38:07.296120 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-08-29 20:38:07.296131 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:38:07.296142 | orchestrator | 2025-08-29 20:38:07.296153 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-08-29 20:38:07.296164 | 
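In the sysctl results above, each profile lands only on the hosts in the matching group: testbed-manager picks up just the generic profile, nodes 0-2 additionally get the elasticsearch and rabbitmq values, and nodes 3-5 get the compute and k3s_node values; the limits role is skipped everywhere. Applying one profile could look roughly like the loop below; ansible.posix.sysctl and the variable from the earlier sketch are assumptions, and the group targeting shown in the log as per-host skips is omitted here:

- name: Set sysctl parameters on rabbitmq     # sketch of a per-profile apply loop
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
    reload: true
  loop: "{{ sysctl_profiles.rabbitmq }}"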
orchestrator | Friday 29 August 2025 20:38:01 +0000 (0:00:00.299) 0:03:37.053 ********* 2025-08-29 20:38:07.296175 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-08-29 20:38:07.296186 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-08-29 20:38:07.296197 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-08-29 20:38:07.296208 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-08-29 20:38:07.296263 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-08-29 20:38:07.296279 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-08-29 20:38:07.296292 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-08-29 20:38:07.296304 | orchestrator | 2025-08-29 20:38:07.296317 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-08-29 20:38:07.296330 | orchestrator | Friday 29 August 2025 20:38:02 +0000 (0:00:01.190) 0:03:38.243 ********* 2025-08-29 20:38:07.296344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:38:07.296382 | orchestrator | 2025-08-29 20:38:07.296395 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-08-29 20:38:07.296408 | orchestrator | Friday 29 August 2025 20:38:03 +0000 (0:00:00.505) 0:03:38.749 ********* 2025-08-29 20:38:07.296421 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:07.296433 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:07.296445 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:07.296458 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:07.296471 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:07.296483 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:07.296495 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:07.296508 | orchestrator | 2025-08-29 20:38:07.296521 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-08-29 20:38:07.296532 | orchestrator | Friday 29 August 2025 20:38:04 +0000 (0:00:01.273) 0:03:40.022 ********* 2025-08-29 20:38:07.296543 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:07.296553 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:07.296564 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:07.296575 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:07.296585 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:07.296596 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:07.296607 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:07.296617 | orchestrator | 2025-08-29 20:38:07.296628 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-08-29 20:38:07.296639 | orchestrator | Friday 29 August 2025 20:38:05 +0000 (0:00:00.585) 0:03:40.608 ********* 2025-08-29 20:38:07.296650 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:07.296661 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:07.296672 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:07.296682 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:38:07.296693 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:07.296704 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:07.296715 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:07.296725 | orchestrator | 2025-08-29 20:38:07.296736 | 
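The osism.commons.services role above gathers service facts, checks for nscd (skipped on every host) and makes sure cron is running and enabled. A terse stand-alone equivalent of that start/enable step, with the service list reduced to the single item visible in the log:

- hosts: all
  become: true
  vars:
    required_services: [cron]        # from the loop item shown above
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Start/enable required services
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: "{{ required_services }}"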
orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-08-29 20:38:07.296764 | orchestrator | Friday 29 August 2025 20:38:05 +0000 (0:00:00.592) 0:03:41.200 ********* 2025-08-29 20:38:07.296775 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:07.296786 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:07.296797 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:07.296808 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:07.296819 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:07.296830 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:07.296841 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:07.296852 | orchestrator | 2025-08-29 20:38:07.296862 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-08-29 20:38:07.296873 | orchestrator | Friday 29 August 2025 20:38:06 +0000 (0:00:00.596) 0:03:41.797 ********* 2025-08-29 20:38:07.296905 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498528.296713, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296921 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498548.8679798, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296942 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498562.7634373, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296954 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498557.0200603, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296966 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498563.1354852, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296978 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498563.1158824, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.296989 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756498562.0610833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:07.297009 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155817 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155941 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155952 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155974 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155981 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155991 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 20:38:32.155999 | orchestrator | 2025-08-29 20:38:32.156008 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 20:38:32.156017 | orchestrator | Friday 29 August 2025 20:38:07 +0000 (0:00:00.974) 0:03:42.771 ********* 2025-08-29 20:38:32.156024 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:32.156032 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156039 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:32.156046 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:38:32.156053 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:32.156060 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:32.156067 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:32.156074 | orchestrator | 2025-08-29 20:38:32.156082 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 20:38:32.156089 | orchestrator | Friday 29 August 2025 20:38:08 +0000 (0:00:01.112) 0:03:43.884 ********* 2025-08-29 20:38:32.156102 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:32.156110 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156117 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:32.156124 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:32.156145 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 20:38:32.156153 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:32.156160 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:32.156167 | orchestrator | 2025-08-29 20:38:32.156174 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-08-29 20:38:32.156182 | orchestrator | Friday 29 August 2025 20:38:09 +0000 (0:00:01.166) 0:03:45.050 ********* 2025-08-29 20:38:32.156189 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:32.156196 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156203 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:32.156210 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:38:32.156217 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:32.156224 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:32.156282 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:32.156290 | orchestrator | 2025-08-29 20:38:32.156297 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-08-29 20:38:32.156304 | orchestrator | Friday 29 August 2025 20:38:10 +0000 (0:00:01.128) 0:03:46.179 ********* 2025-08-29 20:38:32.156311 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:38:32.156317 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:38:32.156324 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:38:32.156331 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:38:32.156338 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:38:32.156345 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:38:32.156352 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:38:32.156359 | orchestrator | 2025-08-29 20:38:32.156366 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-08-29 20:38:32.156376 | orchestrator | Friday 29 August 2025 20:38:10 +0000 (0:00:00.276) 0:03:46.455 ********* 2025-08-29 20:38:32.156384 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:32.156394 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:32.156405 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:32.156413 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:32.156422 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:32.156432 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:32.156440 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:32.156451 | orchestrator | 2025-08-29 20:38:32.156458 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-08-29 20:38:32.156464 | orchestrator | Friday 29 August 2025 20:38:11 +0000 (0:00:00.709) 0:03:47.165 ********* 2025-08-29 20:38:32.156472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:38:32.156481 | orchestrator | 2025-08-29 20:38:32.156488 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-08-29 20:38:32.156497 | orchestrator | Friday 29 August 2025 20:38:12 +0000 (0:00:00.373) 0:03:47.539 ********* 2025-08-29 20:38:32.156511 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:32.156525 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156533 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:32.156542 | 
orchestrator | changed: [testbed-node-3] 2025-08-29 20:38:32.156550 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:32.156559 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:32.156567 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:32.156581 | orchestrator | 2025-08-29 20:38:32.156590 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-08-29 20:38:32.156599 | orchestrator | Friday 29 August 2025 20:38:20 +0000 (0:00:08.382) 0:03:55.921 ********* 2025-08-29 20:38:32.156608 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:32.156624 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:32.156633 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:32.156642 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:32.156656 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:32.156667 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:32.156675 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:32.156684 | orchestrator | 2025-08-29 20:38:32.156691 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-08-29 20:38:32.156698 | orchestrator | Friday 29 August 2025 20:38:21 +0000 (0:00:01.269) 0:03:57.191 ********* 2025-08-29 20:38:32.156709 | orchestrator | ok: [testbed-manager] 2025-08-29 20:38:32.156722 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:38:32.156727 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:38:32.156733 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:38:32.156739 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:38:32.156744 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:38:32.156751 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:38:32.156757 | orchestrator | 2025-08-29 20:38:32.156763 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-08-29 20:38:32.156768 | orchestrator | Friday 29 August 2025 20:38:22 +0000 (0:00:01.038) 0:03:58.229 ********* 2025-08-29 20:38:32.156778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:38:32.156785 | orchestrator | 2025-08-29 20:38:32.156792 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-08-29 20:38:32.156798 | orchestrator | Friday 29 August 2025 20:38:23 +0000 (0:00:00.486) 0:03:58.715 ********* 2025-08-29 20:38:32.156805 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:38:32.156813 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156820 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:38:32.156827 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:38:32.156834 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:38:32.156841 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:38:32.156848 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:32.156855 | orchestrator | 2025-08-29 20:38:32.156862 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-08-29 20:38:32.156870 | orchestrator | Friday 29 August 2025 20:38:31 +0000 (0:00:08.310) 0:04:07.026 ********* 2025-08-29 20:38:32.156877 | orchestrator | changed: [testbed-manager] 2025-08-29 20:38:32.156884 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:38:32.156891 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 20:38:32.156905 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.798912 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.799033 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.799048 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.799061 | orchestrator | 2025-08-29 20:39:41.799074 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-08-29 20:39:41.799087 | orchestrator | Friday 29 August 2025 20:38:32 +0000 (0:00:00.616) 0:04:07.643 ********* 2025-08-29 20:39:41.799098 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.799109 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.799120 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.799131 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.799142 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.799153 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.799164 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.799175 | orchestrator | 2025-08-29 20:39:41.799186 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-08-29 20:39:41.799197 | orchestrator | Friday 29 August 2025 20:38:33 +0000 (0:00:01.083) 0:04:08.727 ********* 2025-08-29 20:39:41.799208 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.799219 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.799301 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.799315 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.799325 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.799336 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.799347 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.799357 | orchestrator | 2025-08-29 20:39:41.799369 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-08-29 20:39:41.799379 | orchestrator | Friday 29 August 2025 20:38:34 +0000 (0:00:01.055) 0:04:09.782 ********* 2025-08-29 20:39:41.799390 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:41.799402 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:41.799413 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:41.799424 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:41.799434 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:41.799447 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:41.799459 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:41.799471 | orchestrator | 2025-08-29 20:39:41.799484 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-08-29 20:39:41.799497 | orchestrator | Friday 29 August 2025 20:38:34 +0000 (0:00:00.310) 0:04:10.092 ********* 2025-08-29 20:39:41.799510 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:41.799522 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:41.799534 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:41.799545 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:41.799555 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:41.799566 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:41.799577 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:41.799588 | orchestrator | 2025-08-29 20:39:41.799599 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-08-29 
20:39:41.799610 | orchestrator | Friday 29 August 2025 20:38:34 +0000 (0:00:00.315) 0:04:10.408 ********* 2025-08-29 20:39:41.799621 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:41.799631 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:41.799642 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:41.799653 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:41.799663 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:41.799674 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:41.799684 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:41.799695 | orchestrator | 2025-08-29 20:39:41.799706 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-08-29 20:39:41.799717 | orchestrator | Friday 29 August 2025 20:38:35 +0000 (0:00:00.286) 0:04:10.694 ********* 2025-08-29 20:39:41.799728 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:41.799738 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:41.799749 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:41.799759 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:41.799770 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:41.799780 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:41.799791 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:41.799802 | orchestrator | 2025-08-29 20:39:41.799812 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-08-29 20:39:41.799823 | orchestrator | Friday 29 August 2025 20:38:40 +0000 (0:00:05.719) 0:04:16.414 ********* 2025-08-29 20:39:41.799835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:39:41.799848 | orchestrator | 2025-08-29 20:39:41.799859 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-08-29 20:39:41.799870 | orchestrator | Friday 29 August 2025 20:38:41 +0000 (0:00:00.379) 0:04:16.793 ********* 2025-08-29 20:39:41.799881 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.799892 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-08-29 20:39:41.799903 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.799937 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-08-29 20:39:41.799949 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:41.799960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:41.799971 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.799982 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-08-29 20:39:41.799993 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.800003 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-08-29 20:39:41.800014 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:41.800025 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.800036 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:41.800046 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-08-29 20:39:41.800057 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.800068 | orchestrator | 
skipping: [testbed-node-4] => (item=apt-daily)  2025-08-29 20:39:41.800079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:41.800106 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:41.800118 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-08-29 20:39:41.800129 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-08-29 20:39:41.800140 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:41.800150 | orchestrator | 2025-08-29 20:39:41.800161 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-08-29 20:39:41.800172 | orchestrator | Friday 29 August 2025 20:38:41 +0000 (0:00:00.341) 0:04:17.134 ********* 2025-08-29 20:39:41.800183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:39:41.800195 | orchestrator | 2025-08-29 20:39:41.800205 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-08-29 20:39:41.800216 | orchestrator | Friday 29 August 2025 20:38:42 +0000 (0:00:00.407) 0:04:17.542 ********* 2025-08-29 20:39:41.800227 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-08-29 20:39:41.800238 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-08-29 20:39:41.800276 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:41.800289 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:41.800299 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-08-29 20:39:41.800310 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-08-29 20:39:41.800321 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:41.800331 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-08-29 20:39:41.800342 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:41.800353 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:41.800364 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-08-29 20:39:41.800374 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:41.800385 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-08-29 20:39:41.800396 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:41.800407 | orchestrator | 2025-08-29 20:39:41.800417 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-08-29 20:39:41.800429 | orchestrator | Friday 29 August 2025 20:38:42 +0000 (0:00:00.315) 0:04:17.857 ********* 2025-08-29 20:39:41.800440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:39:41.800451 | orchestrator | 2025-08-29 20:39:41.800462 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-08-29 20:39:41.800480 | orchestrator | Friday 29 August 2025 20:38:42 +0000 (0:00:00.483) 0:04:18.340 ********* 2025-08-29 20:39:41.800490 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.800501 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.800512 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.800523 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.800534 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.800544 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.800555 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.800566 | orchestrator | 2025-08-29 20:39:41.800577 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-08-29 20:39:41.800588 | orchestrator | Friday 29 August 2025 20:39:18 +0000 (0:00:35.227) 0:04:53.568 ********* 2025-08-29 20:39:41.800598 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.800609 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.800620 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.800630 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.800641 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.800652 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.800663 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.800673 | orchestrator | 2025-08-29 20:39:41.800684 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-08-29 20:39:41.800695 | orchestrator | Friday 29 August 2025 20:39:26 +0000 (0:00:08.268) 0:05:01.837 ********* 2025-08-29 20:39:41.800706 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.800717 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.800727 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.800738 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.800749 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.800759 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.800770 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.800780 | orchestrator | 2025-08-29 20:39:41.800791 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-08-29 20:39:41.800803 | orchestrator | Friday 29 August 2025 20:39:34 +0000 (0:00:07.834) 0:05:09.671 ********* 2025-08-29 20:39:41.800813 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:41.800824 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:41.800835 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:41.800846 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:41.800856 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:41.800867 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:41.800878 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:41.800889 | orchestrator | 2025-08-29 20:39:41.800900 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 20:39:41.800911 | orchestrator | Friday 29 August 2025 20:39:35 +0000 (0:00:01.756) 0:05:11.427 ********* 2025-08-29 20:39:41.800922 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:41.800932 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:41.800943 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:41.800954 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:41.800965 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:41.800976 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:41.800986 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:41.800997 | orchestrator | 2025-08-29 20:39:41.801008 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] 
************************* 2025-08-29 20:39:41.801026 | orchestrator | Friday 29 August 2025 20:39:41 +0000 (0:00:05.853) 0:05:17.280 ********* 2025-08-29 20:39:53.109566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:39:53.109672 | orchestrator | 2025-08-29 20:39:53.109683 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 20:39:53.109710 | orchestrator | Friday 29 August 2025 20:39:42 +0000 (0:00:00.397) 0:05:17.678 ********* 2025-08-29 20:39:53.109716 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:53.109723 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:53.109729 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:53.109735 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:53.109741 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:53.109746 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:53.109752 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:53.109757 | orchestrator | 2025-08-29 20:39:53.109764 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 20:39:53.109769 | orchestrator | Friday 29 August 2025 20:39:43 +0000 (0:00:00.835) 0:05:18.514 ********* 2025-08-29 20:39:53.109775 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:53.109782 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:53.109789 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:53.109794 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:53.109800 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:53.109805 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:53.109810 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:53.109816 | orchestrator | 2025-08-29 20:39:53.109821 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 20:39:53.109827 | orchestrator | Friday 29 August 2025 20:39:44 +0000 (0:00:01.950) 0:05:20.464 ********* 2025-08-29 20:39:53.109833 | orchestrator | changed: [testbed-manager] 2025-08-29 20:39:53.109839 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:39:53.109844 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:39:53.109850 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:39:53.109856 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:39:53.109862 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:39:53.109867 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:39:53.109873 | orchestrator | 2025-08-29 20:39:53.109879 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 20:39:53.109885 | orchestrator | Friday 29 August 2025 20:39:45 +0000 (0:00:00.782) 0:05:21.246 ********* 2025-08-29 20:39:53.109891 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.109896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.109902 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.109907 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:53.109913 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:53.109919 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:53.109925 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:53.109931 | orchestrator | 2025-08-29 
20:39:53.109937 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 20:39:53.109943 | orchestrator | Friday 29 August 2025 20:39:45 +0000 (0:00:00.250) 0:05:21.497 ********* 2025-08-29 20:39:53.109948 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.109954 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.109959 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.109965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:53.109971 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:53.109976 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:53.109982 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:53.109988 | orchestrator | 2025-08-29 20:39:53.109994 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-08-29 20:39:53.109999 | orchestrator | Friday 29 August 2025 20:39:46 +0000 (0:00:00.385) 0:05:21.883 ********* 2025-08-29 20:39:53.110006 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:53.110011 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:53.110064 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:53.110070 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:53.110076 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:53.110081 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:53.110087 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:53.110099 | orchestrator | 2025-08-29 20:39:53.110119 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-08-29 20:39:53.110125 | orchestrator | Friday 29 August 2025 20:39:46 +0000 (0:00:00.283) 0:05:22.167 ********* 2025-08-29 20:39:53.110131 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.110137 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.110143 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.110149 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:53.110155 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:53.110161 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:53.110167 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:53.110173 | orchestrator | 2025-08-29 20:39:53.110179 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-08-29 20:39:53.110190 | orchestrator | Friday 29 August 2025 20:39:46 +0000 (0:00:00.240) 0:05:22.407 ********* 2025-08-29 20:39:53.110196 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:53.110202 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:53.110208 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:53.110214 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:53.110220 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:53.110226 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:53.110232 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:53.110238 | orchestrator | 2025-08-29 20:39:53.110243 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-08-29 20:39:53.110250 | orchestrator | Friday 29 August 2025 20:39:47 +0000 (0:00:00.308) 0:05:22.716 ********* 2025-08-29 20:39:53.110274 | orchestrator | ok: [testbed-manager] =>  2025-08-29 20:39:53.110280 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110286 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 20:39:53.110291 | orchestrator |  
docker_version: 5:27.5.1 2025-08-29 20:39:53.110297 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 20:39:53.110302 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110308 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 20:39:53.110313 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110319 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 20:39:53.110325 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110345 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 20:39:53.110351 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110357 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 20:39:53.110362 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 20:39:53.110368 | orchestrator | 2025-08-29 20:39:53.110374 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-08-29 20:39:53.110380 | orchestrator | Friday 29 August 2025 20:39:47 +0000 (0:00:00.261) 0:05:22.977 ********* 2025-08-29 20:39:53.110386 | orchestrator | ok: [testbed-manager] =>  2025-08-29 20:39:53.110391 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110397 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 20:39:53.110403 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110408 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 20:39:53.110414 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110419 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 20:39:53.110425 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110430 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 20:39:53.110435 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110441 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 20:39:53.110447 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110453 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 20:39:53.110458 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 20:39:53.110463 | orchestrator | 2025-08-29 20:39:53.110469 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-08-29 20:39:53.110474 | orchestrator | Friday 29 August 2025 20:39:47 +0000 (0:00:00.377) 0:05:23.355 ********* 2025-08-29 20:39:53.110480 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.110490 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.110496 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.110501 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:53.110506 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:53.110512 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:53.110518 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:39:53.110524 | orchestrator | 2025-08-29 20:39:53.110529 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-08-29 20:39:53.110535 | orchestrator | Friday 29 August 2025 20:39:48 +0000 (0:00:00.261) 0:05:23.616 ********* 2025-08-29 20:39:53.110541 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.110547 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.110552 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.110558 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:39:53.110563 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:39:53.110569 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:39:53.110575 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 20:39:53.110580 | orchestrator | 2025-08-29 20:39:53.110585 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-08-29 20:39:53.110591 | orchestrator | Friday 29 August 2025 20:39:48 +0000 (0:00:00.262) 0:05:23.879 ********* 2025-08-29 20:39:53.110599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:39:53.110607 | orchestrator | 2025-08-29 20:39:53.110612 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-08-29 20:39:53.110618 | orchestrator | Friday 29 August 2025 20:39:48 +0000 (0:00:00.442) 0:05:24.321 ********* 2025-08-29 20:39:53.110623 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:53.110629 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:53.110634 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:53.110640 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:53.110645 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:53.110651 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:53.110656 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:53.110662 | orchestrator | 2025-08-29 20:39:53.110667 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-08-29 20:39:53.110673 | orchestrator | Friday 29 August 2025 20:39:49 +0000 (0:00:00.878) 0:05:25.200 ********* 2025-08-29 20:39:53.110679 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:39:53.110684 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:39:53.110690 | orchestrator | ok: [testbed-manager] 2025-08-29 20:39:53.110695 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:39:53.110701 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:39:53.110707 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:39:53.110712 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:39:53.110718 | orchestrator | 2025-08-29 20:39:53.110724 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 20:39:53.110731 | orchestrator | Friday 29 August 2025 20:39:52 +0000 (0:00:02.801) 0:05:28.001 ********* 2025-08-29 20:39:53.110736 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 20:39:53.110742 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 20:39:53.110747 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 20:39:53.110756 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 20:39:53.110762 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 20:39:53.110768 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 20:39:53.110773 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:39:53.110779 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 20:39:53.110784 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 20:39:53.110790 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 20:39:53.110799 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:39:53.110805 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-08-29 20:39:53.110811 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io)  2025-08-29 20:39:53.110816 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-08-29 20:39:53.110822 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:39:53.110827 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 20:39:53.110833 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 20:39:53.110843 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-08-29 20:40:52.332106 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:40:52.332223 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-08-29 20:40:52.332238 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-08-29 20:40:52.332249 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-08-29 20:40:52.332259 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:40:52.332328 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:40:52.332340 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-08-29 20:40:52.332350 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-08-29 20:40:52.332360 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-08-29 20:40:52.332371 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:40:52.332381 | orchestrator | 2025-08-29 20:40:52.332392 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-08-29 20:40:52.332404 | orchestrator | Friday 29 August 2025 20:39:53 +0000 (0:00:00.747) 0:05:28.749 ********* 2025-08-29 20:40:52.332414 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.332424 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332434 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332444 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.332454 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332464 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.332474 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332484 | orchestrator | 2025-08-29 20:40:52.332494 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-08-29 20:40:52.332504 | orchestrator | Friday 29 August 2025 20:39:59 +0000 (0:00:06.308) 0:05:35.058 ********* 2025-08-29 20:40:52.332513 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332523 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.332533 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332543 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332553 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332563 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.332574 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.332584 | orchestrator | 2025-08-29 20:40:52.332593 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-08-29 20:40:52.332603 | orchestrator | Friday 29 August 2025 20:40:00 +0000 (0:00:01.091) 0:05:36.149 ********* 2025-08-29 20:40:52.332613 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.332623 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332633 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332644 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.332655 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332666 | orchestrator | changed: 
[testbed-node-5] 2025-08-29 20:40:52.332677 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332688 | orchestrator | 2025-08-29 20:40:52.332699 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-08-29 20:40:52.332710 | orchestrator | Friday 29 August 2025 20:40:08 +0000 (0:00:07.795) 0:05:43.944 ********* 2025-08-29 20:40:52.332721 | orchestrator | changed: [testbed-manager] 2025-08-29 20:40:52.332732 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332743 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332780 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332791 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332802 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.332813 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.332822 | orchestrator | 2025-08-29 20:40:52.332832 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-08-29 20:40:52.332842 | orchestrator | Friday 29 August 2025 20:40:11 +0000 (0:00:03.218) 0:05:47.163 ********* 2025-08-29 20:40:52.332852 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.332862 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332871 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332881 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332891 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332901 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.332910 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.332920 | orchestrator | 2025-08-29 20:40:52.332931 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-08-29 20:40:52.332940 | orchestrator | Friday 29 August 2025 20:40:13 +0000 (0:00:01.537) 0:05:48.700 ********* 2025-08-29 20:40:52.332950 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.332960 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.332970 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.332979 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.332989 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.332999 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333008 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333018 | orchestrator | 2025-08-29 20:40:52.333028 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-08-29 20:40:52.333038 | orchestrator | Friday 29 August 2025 20:40:14 +0000 (0:00:01.333) 0:05:50.033 ********* 2025-08-29 20:40:52.333047 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:40:52.333072 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:40:52.333082 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:40:52.333091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:40:52.333101 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:40:52.333111 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:40:52.333121 | orchestrator | changed: [testbed-manager] 2025-08-29 20:40:52.333130 | orchestrator | 2025-08-29 20:40:52.333140 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-08-29 20:40:52.333150 | orchestrator | Friday 29 August 2025 20:40:15 +0000 (0:00:00.650) 0:05:50.684 ********* 2025-08-29 20:40:52.333160 | orchestrator | ok: [testbed-manager] 2025-08-29 
20:40:52.333170 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.333179 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.333189 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333199 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333208 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.333218 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.333228 | orchestrator | 2025-08-29 20:40:52.333238 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-08-29 20:40:52.333248 | orchestrator | Friday 29 August 2025 20:40:24 +0000 (0:00:09.716) 0:06:00.401 ********* 2025-08-29 20:40:52.333258 | orchestrator | changed: [testbed-manager] 2025-08-29 20:40:52.333304 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.333315 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.333325 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.333335 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.333344 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333354 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333363 | orchestrator | 2025-08-29 20:40:52.333373 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-08-29 20:40:52.333383 | orchestrator | Friday 29 August 2025 20:40:25 +0000 (0:00:00.902) 0:06:01.304 ********* 2025-08-29 20:40:52.333401 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.333411 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.333421 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.333430 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.333440 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.333450 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333459 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333469 | orchestrator | 2025-08-29 20:40:52.333479 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-08-29 20:40:52.333488 | orchestrator | Friday 29 August 2025 20:40:34 +0000 (0:00:08.777) 0:06:10.081 ********* 2025-08-29 20:40:52.333498 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.333508 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.333518 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.333527 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.333537 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333546 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333556 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.333566 | orchestrator | 2025-08-29 20:40:52.333575 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-08-29 20:40:52.333585 | orchestrator | Friday 29 August 2025 20:40:45 +0000 (0:00:10.850) 0:06:20.932 ********* 2025-08-29 20:40:52.333595 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-08-29 20:40:52.333605 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-08-29 20:40:52.333614 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-08-29 20:40:52.333624 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-08-29 20:40:52.333634 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-08-29 20:40:52.333643 | orchestrator | ok: [testbed-node-3] => 
(item=python3-docker) 2025-08-29 20:40:52.333653 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-08-29 20:40:52.333663 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-08-29 20:40:52.333672 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-08-29 20:40:52.333682 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-08-29 20:40:52.333691 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-08-29 20:40:52.333701 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-08-29 20:40:52.333711 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-08-29 20:40:52.333720 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-08-29 20:40:52.333730 | orchestrator | 2025-08-29 20:40:52.333740 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-08-29 20:40:52.333750 | orchestrator | Friday 29 August 2025 20:40:46 +0000 (0:00:01.200) 0:06:22.132 ********* 2025-08-29 20:40:52.333759 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:40:52.333769 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:40:52.333778 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:40:52.333788 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:40:52.333798 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:40:52.333807 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:40:52.333817 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:40:52.333827 | orchestrator | 2025-08-29 20:40:52.333837 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-08-29 20:40:52.333847 | orchestrator | Friday 29 August 2025 20:40:47 +0000 (0:00:00.511) 0:06:22.643 ********* 2025-08-29 20:40:52.333856 | orchestrator | ok: [testbed-manager] 2025-08-29 20:40:52.333866 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:40:52.333876 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:40:52.333885 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:40:52.333895 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:40:52.333904 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:40:52.333914 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:40:52.333924 | orchestrator | 2025-08-29 20:40:52.333934 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-08-29 20:40:52.333950 | orchestrator | Friday 29 August 2025 20:40:51 +0000 (0:00:04.359) 0:06:27.003 ********* 2025-08-29 20:40:52.333960 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:40:52.333970 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:40:52.333979 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:40:52.333989 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:40:52.333999 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:40:52.334008 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:40:52.334087 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:40:52.334098 | orchestrator | 2025-08-29 20:40:52.334109 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-08-29 20:40:52.334119 | orchestrator | Friday 29 August 2025 20:40:52 +0000 (0:00:00.527) 0:06:27.530 ********* 2025-08-29 20:40:52.334129 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-08-29 20:40:52.334139 | 
orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-08-29 20:40:52.334148 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:40:52.334158 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-08-29 20:40:52.334168 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-08-29 20:40:52.334178 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:40:52.334187 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-08-29 20:40:52.334197 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-08-29 20:40:52.334207 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:40:52.334217 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-08-29 20:40:52.334234 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-08-29 20:41:12.350128 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:12.350247 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-08-29 20:41:12.350265 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-08-29 20:41:12.350343 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:12.350356 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-08-29 20:41:12.350367 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-08-29 20:41:12.350378 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:12.350390 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-08-29 20:41:12.350401 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-08-29 20:41:12.350412 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:12.350423 | orchestrator | 2025-08-29 20:41:12.350437 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-08-29 20:41:12.350450 | orchestrator | Friday 29 August 2025 20:40:52 +0000 (0:00:00.555) 0:06:28.085 ********* 2025-08-29 20:41:12.350461 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:12.350472 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:12.350483 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:12.350494 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:12.350505 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:12.350516 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:12.350527 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:12.350538 | orchestrator | 2025-08-29 20:41:12.350550 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-08-29 20:41:12.350561 | orchestrator | Friday 29 August 2025 20:40:53 +0000 (0:00:00.562) 0:06:28.648 ********* 2025-08-29 20:41:12.350572 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:12.350583 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:12.350594 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:12.350605 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:12.350618 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:12.350630 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:12.350667 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:12.350680 | orchestrator | 2025-08-29 20:41:12.350692 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-08-29 20:41:12.350705 | orchestrator | Friday 29 August 
2025 20:40:53 +0000 (0:00:00.515) 0:06:29.164 ********* 2025-08-29 20:41:12.350717 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:12.350729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:12.350742 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:12.350754 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:12.350766 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:12.350779 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:12.350791 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:12.350803 | orchestrator | 2025-08-29 20:41:12.350815 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-08-29 20:41:12.350827 | orchestrator | Friday 29 August 2025 20:40:54 +0000 (0:00:00.719) 0:06:29.884 ********* 2025-08-29 20:41:12.350839 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.350852 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.350863 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.350875 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.350888 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.350901 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.350913 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.350925 | orchestrator | 2025-08-29 20:41:12.350938 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-08-29 20:41:12.350950 | orchestrator | Friday 29 August 2025 20:40:56 +0000 (0:00:01.948) 0:06:31.832 ********* 2025-08-29 20:41:12.350964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:41:12.350979 | orchestrator | 2025-08-29 20:41:12.350991 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-08-29 20:41:12.351004 | orchestrator | Friday 29 August 2025 20:40:57 +0000 (0:00:00.827) 0:06:32.659 ********* 2025-08-29 20:41:12.351015 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351027 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:12.351038 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:12.351049 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:12.351059 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:12.351070 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:12.351081 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:12.351092 | orchestrator | 2025-08-29 20:41:12.351102 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-08-29 20:41:12.351113 | orchestrator | Friday 29 August 2025 20:40:58 +0000 (0:00:00.863) 0:06:33.523 ********* 2025-08-29 20:41:12.351124 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351135 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:12.351146 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:12.351156 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:12.351167 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:12.351178 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:12.351188 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:12.351199 | orchestrator | 2025-08-29 20:41:12.351210 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] 
*********************** 2025-08-29 20:41:12.351221 | orchestrator | Friday 29 August 2025 20:40:59 +0000 (0:00:01.030) 0:06:34.553 ********* 2025-08-29 20:41:12.351232 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351243 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:12.351253 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:12.351264 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:12.351294 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:12.351306 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:12.351317 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:12.351337 | orchestrator | 2025-08-29 20:41:12.351348 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-08-29 20:41:12.351359 | orchestrator | Friday 29 August 2025 20:41:00 +0000 (0:00:01.310) 0:06:35.863 ********* 2025-08-29 20:41:12.351387 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:12.351399 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.351410 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.351421 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.351431 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.351442 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.351453 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.351464 | orchestrator | 2025-08-29 20:41:12.351475 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-08-29 20:41:12.351487 | orchestrator | Friday 29 August 2025 20:41:01 +0000 (0:00:01.371) 0:06:37.235 ********* 2025-08-29 20:41:12.351498 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351508 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:12.351519 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:12.351530 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:12.351541 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:12.351552 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:12.351563 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:12.351573 | orchestrator | 2025-08-29 20:41:12.351584 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-08-29 20:41:12.351595 | orchestrator | Friday 29 August 2025 20:41:03 +0000 (0:00:01.316) 0:06:38.551 ********* 2025-08-29 20:41:12.351606 | orchestrator | changed: [testbed-manager] 2025-08-29 20:41:12.351617 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:12.351628 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:12.351639 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:12.351650 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:12.351660 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:12.351671 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:12.351682 | orchestrator | 2025-08-29 20:41:12.351693 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-08-29 20:41:12.351704 | orchestrator | Friday 29 August 2025 20:41:04 +0000 (0:00:01.526) 0:06:40.078 ********* 2025-08-29 20:41:12.351733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:41:12.351745 | orchestrator | 2025-08-29 
20:41:12.351756 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-08-29 20:41:12.351767 | orchestrator | Friday 29 August 2025 20:41:05 +0000 (0:00:00.841) 0:06:40.920 ********* 2025-08-29 20:41:12.351778 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351789 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.351800 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.351811 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.351822 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.351832 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.351857 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.351878 | orchestrator | 2025-08-29 20:41:12.351890 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-08-29 20:41:12.351901 | orchestrator | Friday 29 August 2025 20:41:06 +0000 (0:00:01.468) 0:06:42.389 ********* 2025-08-29 20:41:12.351912 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.351923 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.351934 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.351945 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.351955 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.351966 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.351977 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.351988 | orchestrator | 2025-08-29 20:41:12.351999 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-08-29 20:41:12.352018 | orchestrator | Friday 29 August 2025 20:41:08 +0000 (0:00:01.170) 0:06:43.559 ********* 2025-08-29 20:41:12.352029 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.352040 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.352051 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.352061 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.352072 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.352083 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.352093 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.352104 | orchestrator | 2025-08-29 20:41:12.352115 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-08-29 20:41:12.352126 | orchestrator | Friday 29 August 2025 20:41:09 +0000 (0:00:01.397) 0:06:44.956 ********* 2025-08-29 20:41:12.352137 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:12.352148 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:12.352158 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:12.352169 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:12.352180 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:12.352191 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:12.352201 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:12.352212 | orchestrator | 2025-08-29 20:41:12.352223 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-08-29 20:41:12.352234 | orchestrator | Friday 29 August 2025 20:41:11 +0000 (0:00:01.673) 0:06:46.629 ********* 2025-08-29 20:41:12.352250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:41:12.352262 | orchestrator | 2025-08-29 20:41:12.352273 | orchestrator 
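Note on the docker steps above: the role renders /etc/docker/daemon.json, a limits file and a systemd drop-in before the service is managed, and the "Restart docker service" handler further down only fires where one of those files changed. The actual template values of osism.services.docker are not printed in this log; the following is only a minimal, hypothetical sketch of what such a rendered daemon.json and drop-in could look like:

    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" },
      "live-restore": true
    }

    # hypothetical drop-in under /etc/systemd/system/docker.service.d/ (not the OSISM original)
    [Service]
    LimitNOFILE=1048576

Whenever one of these files changes, the role reloads systemd and restarts dockerd through its handlers, which matches the "Reload systemd daemon" tasks and the docker restart handler visible later in this run.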
| TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:12.352302 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.903) 0:06:47.533 ********* 2025-08-29 20:41:12.352313 | orchestrator | 2025-08-29 20:41:12.352324 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:12.352335 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.038) 0:06:47.572 ********* 2025-08-29 20:41:12.352346 | orchestrator | 2025-08-29 20:41:12.352357 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:12.352367 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.043) 0:06:47.615 ********* 2025-08-29 20:41:12.352378 | orchestrator | 2025-08-29 20:41:12.352389 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:12.352400 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.037) 0:06:47.653 ********* 2025-08-29 20:41:12.352411 | orchestrator | 2025-08-29 20:41:12.352429 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:40.517779 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.037) 0:06:47.690 ********* 2025-08-29 20:41:40.517888 | orchestrator | 2025-08-29 20:41:40.517903 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:40.517914 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.042) 0:06:47.732 ********* 2025-08-29 20:41:40.517924 | orchestrator | 2025-08-29 20:41:40.517934 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-08-29 20:41:40.517944 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.053) 0:06:47.786 ********* 2025-08-29 20:41:40.517953 | orchestrator | 2025-08-29 20:41:40.517964 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 20:41:40.517974 | orchestrator | Friday 29 August 2025 20:41:12 +0000 (0:00:00.038) 0:06:47.824 ********* 2025-08-29 20:41:40.517984 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:40.517995 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:40.518005 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:40.518067 | orchestrator | 2025-08-29 20:41:40.518080 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-08-29 20:41:40.518118 | orchestrator | Friday 29 August 2025 20:41:13 +0000 (0:00:01.507) 0:06:49.332 ********* 2025-08-29 20:41:40.518129 | orchestrator | changed: [testbed-manager] 2025-08-29 20:41:40.518140 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:40.518150 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:40.518159 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:40.518169 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:40.518179 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:40.518188 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:40.518198 | orchestrator | 2025-08-29 20:41:40.518208 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-08-29 20:41:40.518218 | orchestrator | Friday 29 August 2025 20:41:15 +0000 (0:00:01.736) 0:06:51.068 ********* 2025-08-29 20:41:40.518227 | orchestrator | changed: [testbed-manager] 2025-08-29 
20:41:40.518237 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:40.518247 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:40.518256 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:40.518265 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:40.518275 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:40.518285 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:40.518294 | orchestrator | 2025-08-29 20:41:40.518326 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-08-29 20:41:40.518337 | orchestrator | Friday 29 August 2025 20:41:16 +0000 (0:00:01.245) 0:06:52.314 ********* 2025-08-29 20:41:40.518348 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:40.518358 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:40.518369 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:40.518380 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:40.518391 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:40.518402 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:40.518413 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:40.518423 | orchestrator | 2025-08-29 20:41:40.518434 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-08-29 20:41:40.518445 | orchestrator | Friday 29 August 2025 20:41:19 +0000 (0:00:02.616) 0:06:54.931 ********* 2025-08-29 20:41:40.518456 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:40.518467 | orchestrator | 2025-08-29 20:41:40.518478 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-08-29 20:41:40.518489 | orchestrator | Friday 29 August 2025 20:41:19 +0000 (0:00:00.099) 0:06:55.030 ********* 2025-08-29 20:41:40.518499 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.518510 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:40.518521 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:40.518532 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:40.518543 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:40.518553 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:40.518564 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:40.518574 | orchestrator | 2025-08-29 20:41:40.518585 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-08-29 20:41:40.518597 | orchestrator | Friday 29 August 2025 20:41:20 +0000 (0:00:01.126) 0:06:56.157 ********* 2025-08-29 20:41:40.518607 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:40.518618 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:40.518629 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:40.518639 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:40.518650 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:40.518661 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:40.518671 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:40.518682 | orchestrator | 2025-08-29 20:41:40.518693 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-08-29 20:41:40.518703 | orchestrator | Friday 29 August 2025 20:41:21 +0000 (0:00:00.833) 0:06:56.990 ********* 2025-08-29 20:41:40.518727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:41:40.518749 | orchestrator | 2025-08-29 20:41:40.518760 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-08-29 20:41:40.518769 | orchestrator | Friday 29 August 2025 20:41:22 +0000 (0:00:00.929) 0:06:57.920 ********* 2025-08-29 20:41:40.518779 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.518789 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:40.518798 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:40.518808 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:40.518818 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:40.518827 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:40.518836 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:40.518846 | orchestrator | 2025-08-29 20:41:40.518855 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-08-29 20:41:40.518865 | orchestrator | Friday 29 August 2025 20:41:23 +0000 (0:00:00.877) 0:06:58.797 ********* 2025-08-29 20:41:40.518875 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-08-29 20:41:40.518885 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-08-29 20:41:40.518911 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-08-29 20:41:40.518922 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-08-29 20:41:40.518931 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-08-29 20:41:40.518941 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-08-29 20:41:40.518951 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-08-29 20:41:40.518960 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-08-29 20:41:40.518970 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-08-29 20:41:40.518980 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-08-29 20:41:40.518989 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-08-29 20:41:40.518999 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-08-29 20:41:40.519008 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-08-29 20:41:40.519018 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-08-29 20:41:40.519027 | orchestrator | 2025-08-29 20:41:40.519037 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-08-29 20:41:40.519047 | orchestrator | Friday 29 August 2025 20:41:26 +0000 (0:00:02.800) 0:07:01.598 ********* 2025-08-29 20:41:40.519056 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:40.519066 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:40.519075 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:40.519085 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:40.519094 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:40.519104 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:40.519113 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:40.519123 | orchestrator | 2025-08-29 20:41:40.519133 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-08-29 20:41:40.519155 | orchestrator | Friday 29 August 2025 20:41:26 +0000 (0:00:00.571) 0:07:02.169 ********* 
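Note on the "Create facts directory" and "Copy docker fact files" tasks above: they install executable fact scripts named docker_containers and docker_images into Ansible's local facts directory (by default /etc/ansible/facts.d). The scripts shipped by osism.services.docker are not reproduced in this log; as a purely hypothetical sketch, any executable local fact only has to print JSON on stdout, for example:

    #!/usr/bin/env bash
    # hypothetical /etc/ansible/facts.d/docker_containers.fact (not the OSISM original):
    # executable *.fact files must emit JSON; the output becomes
    # ansible_local.docker_containers on the next fact-gathering run
    docker ps --all --format '{{.Names}}' \
      | jq --raw-input --slurp --compact-output 'split("\n")[:-1]'

Later plays can then read ansible_local instead of shelling out to docker again.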
2025-08-29 20:41:40.519177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:41:40.519188 | orchestrator | 2025-08-29 20:41:40.519198 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-08-29 20:41:40.519208 | orchestrator | Friday 29 August 2025 20:41:27 +0000 (0:00:00.878) 0:07:03.048 ********* 2025-08-29 20:41:40.519218 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.519227 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:40.519237 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:40.519253 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:40.519264 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:40.519279 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:40.519296 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:40.519337 | orchestrator | 2025-08-29 20:41:40.519354 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-08-29 20:41:40.519370 | orchestrator | Friday 29 August 2025 20:41:28 +0000 (0:00:01.212) 0:07:04.261 ********* 2025-08-29 20:41:40.519386 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.519401 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:40.519417 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:40.519433 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:40.519450 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:40.519466 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:40.519482 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:40.519499 | orchestrator | 2025-08-29 20:41:40.519514 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-08-29 20:41:40.519528 | orchestrator | Friday 29 August 2025 20:41:29 +0000 (0:00:00.884) 0:07:05.145 ********* 2025-08-29 20:41:40.519544 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:40.519560 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:40.519575 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:40.519591 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:40.519607 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:40.519623 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:40.519640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:40.519655 | orchestrator | 2025-08-29 20:41:40.519672 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-08-29 20:41:40.519690 | orchestrator | Friday 29 August 2025 20:41:30 +0000 (0:00:00.549) 0:07:05.695 ********* 2025-08-29 20:41:40.519708 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.519725 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:41:40.519742 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:41:40.519758 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:41:40.519774 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:41:40.519791 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:41:40.519807 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:41:40.519823 | orchestrator | 2025-08-29 20:41:40.519850 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-08-29 20:41:40.519868 | orchestrator | Friday 29 August 2025 20:41:31 +0000 
(0:00:01.592) 0:07:07.288 ********* 2025-08-29 20:41:40.519883 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:41:40.519900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:41:40.519917 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:41:40.519933 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:41:40.520025 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:41:40.520039 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:41:40.520049 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:41:40.520060 | orchestrator | 2025-08-29 20:41:40.520077 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-08-29 20:41:40.520098 | orchestrator | Friday 29 August 2025 20:41:32 +0000 (0:00:00.503) 0:07:07.791 ********* 2025-08-29 20:41:40.520119 | orchestrator | ok: [testbed-manager] 2025-08-29 20:41:40.520134 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:41:40.520150 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:41:40.520165 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:41:40.520180 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:41:40.520196 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:41:40.520212 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:41:40.520229 | orchestrator | 2025-08-29 20:41:40.520264 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-08-29 20:42:13.347028 | orchestrator | Friday 29 August 2025 20:41:40 +0000 (0:00:08.204) 0:07:15.996 ********* 2025-08-29 20:42:13.347147 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.347166 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:13.347204 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:13.347216 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:13.347227 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:13.347238 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:13.347249 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:13.347260 | orchestrator | 2025-08-29 20:42:13.347272 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-08-29 20:42:13.347284 | orchestrator | Friday 29 August 2025 20:41:41 +0000 (0:00:01.459) 0:07:17.456 ********* 2025-08-29 20:42:13.347295 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.347306 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:13.347316 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:13.347328 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:13.347377 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:13.347388 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:13.347399 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:13.347410 | orchestrator | 2025-08-29 20:42:13.347421 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-08-29 20:42:13.347432 | orchestrator | Friday 29 August 2025 20:41:43 +0000 (0:00:01.889) 0:07:19.345 ********* 2025-08-29 20:42:13.347443 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.347454 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:13.347465 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:13.347475 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:13.347486 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:13.347497 | orchestrator | changed: [testbed-node-4] 
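Note on the docker_compose tasks around this point: the role installs the docker-compose-plugin package ("docker compose" as a CLI plugin) instead of the old standalone docker-compose binary, and then hooks compose-managed services into systemd via an osism.target plus a docker-compose unit file. Neither unit is printed in this log; a hypothetical minimal sketch of such a target, only to illustrate the wiring:

    # hypothetical /etc/systemd/system/osism.target (the file shipped by
    # osism.commons.docker_compose may differ)
    [Unit]
    Description=OSISM services
    After=docker.service

    [Install]
    WantedBy=multi-user.target

Per-service compose units can then declare WantedBy=osism.target so that starting the target brings the whole stack up; running "docker compose version" on any node is a quick check that the plugin install above succeeded.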
2025-08-29 20:42:13.347507 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:13.347518 | orchestrator | 2025-08-29 20:42:13.347529 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 20:42:13.347540 | orchestrator | Friday 29 August 2025 20:41:45 +0000 (0:00:02.022) 0:07:21.368 ********* 2025-08-29 20:42:13.347551 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.347564 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.347576 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.347590 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.347602 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.347614 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.347626 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.347638 | orchestrator | 2025-08-29 20:42:13.347650 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 20:42:13.347662 | orchestrator | Friday 29 August 2025 20:41:46 +0000 (0:00:00.872) 0:07:22.241 ********* 2025-08-29 20:42:13.347675 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:42:13.347687 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:42:13.347699 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:42:13.347711 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:42:13.347723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:42:13.347735 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:42:13.347747 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:42:13.347759 | orchestrator | 2025-08-29 20:42:13.347771 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-08-29 20:42:13.347784 | orchestrator | Friday 29 August 2025 20:41:47 +0000 (0:00:00.818) 0:07:23.059 ********* 2025-08-29 20:42:13.347796 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:42:13.347808 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:42:13.347819 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:42:13.347832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:42:13.347844 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:42:13.347855 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:42:13.347867 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:42:13.347879 | orchestrator | 2025-08-29 20:42:13.347891 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-08-29 20:42:13.347904 | orchestrator | Friday 29 August 2025 20:41:48 +0000 (0:00:00.493) 0:07:23.553 ********* 2025-08-29 20:42:13.347924 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.347935 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.347946 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.347957 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.347968 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.347978 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.347989 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348000 | orchestrator | 2025-08-29 20:42:13.348011 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-08-29 20:42:13.348022 | orchestrator | Friday 29 August 2025 20:41:48 +0000 (0:00:00.709) 0:07:24.262 ********* 2025-08-29 20:42:13.348033 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348043 | orchestrator | ok: [testbed-node-0] 2025-08-29 
20:42:13.348054 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348065 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348075 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348086 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348096 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348107 | orchestrator | 2025-08-29 20:42:13.348133 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-08-29 20:42:13.348145 | orchestrator | Friday 29 August 2025 20:41:49 +0000 (0:00:00.562) 0:07:24.825 ********* 2025-08-29 20:42:13.348155 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348166 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.348177 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348188 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348198 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348209 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348220 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348230 | orchestrator | 2025-08-29 20:42:13.348241 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-08-29 20:42:13.348252 | orchestrator | Friday 29 August 2025 20:41:49 +0000 (0:00:00.627) 0:07:25.453 ********* 2025-08-29 20:42:13.348263 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348274 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.348284 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348295 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348306 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348316 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348327 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348358 | orchestrator | 2025-08-29 20:42:13.348370 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-08-29 20:42:13.348398 | orchestrator | Friday 29 August 2025 20:41:55 +0000 (0:00:05.612) 0:07:31.066 ********* 2025-08-29 20:42:13.348410 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:42:13.348421 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:42:13.348432 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:42:13.348443 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:42:13.348454 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:42:13.348465 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:42:13.348476 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:42:13.348486 | orchestrator | 2025-08-29 20:42:13.348497 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-08-29 20:42:13.348508 | orchestrator | Friday 29 August 2025 20:41:56 +0000 (0:00:00.546) 0:07:31.612 ********* 2025-08-29 20:42:13.348521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:42:13.348535 | orchestrator | 2025-08-29 20:42:13.348546 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-08-29 20:42:13.348557 | orchestrator | Friday 29 August 2025 20:41:57 +0000 (0:00:01.017) 0:07:32.629 ********* 2025-08-29 20:42:13.348568 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348579 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 20:42:13.348597 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348608 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348619 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348630 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348641 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348652 | orchestrator | 2025-08-29 20:42:13.348663 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-08-29 20:42:13.348674 | orchestrator | Friday 29 August 2025 20:41:58 +0000 (0:00:01.853) 0:07:34.483 ********* 2025-08-29 20:42:13.348685 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348696 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.348707 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348717 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348728 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348738 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348749 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348760 | orchestrator | 2025-08-29 20:42:13.348770 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-08-29 20:42:13.348781 | orchestrator | Friday 29 August 2025 20:42:00 +0000 (0:00:01.245) 0:07:35.728 ********* 2025-08-29 20:42:13.348792 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:13.348803 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:13.348813 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:13.348824 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:13.348835 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:13.348845 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:13.348856 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:13.348867 | orchestrator | 2025-08-29 20:42:13.348878 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-08-29 20:42:13.348889 | orchestrator | Friday 29 August 2025 20:42:01 +0000 (0:00:01.059) 0:07:36.788 ********* 2025-08-29 20:42:13.348900 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348912 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348923 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348934 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348945 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348956 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348968 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-08-29 20:42:13.348978 | orchestrator | 2025-08-29 20:42:13.348989 | orchestrator | TASK [osism.services.lldpd : Include distribution specific 
install tasks] ****** 2025-08-29 20:42:13.349000 | orchestrator | Friday 29 August 2025 20:42:02 +0000 (0:00:01.696) 0:07:38.484 ********* 2025-08-29 20:42:13.349012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:42:13.349024 | orchestrator | 2025-08-29 20:42:13.349035 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-08-29 20:42:13.349046 | orchestrator | Friday 29 August 2025 20:42:03 +0000 (0:00:00.767) 0:07:39.252 ********* 2025-08-29 20:42:13.349057 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:13.349068 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:13.349086 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:13.349097 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:13.349108 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:13.349119 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:13.349129 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:13.349140 | orchestrator | 2025-08-29 20:42:13.349151 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-08-29 20:42:13.349168 | orchestrator | Friday 29 August 2025 20:42:13 +0000 (0:00:09.560) 0:07:48.813 ********* 2025-08-29 20:42:30.611888 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:30.611997 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:30.612010 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:30.612021 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:30.612031 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:30.612041 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:30.612051 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:30.612061 | orchestrator | 2025-08-29 20:42:30.612073 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-08-29 20:42:30.612084 | orchestrator | Friday 29 August 2025 20:42:16 +0000 (0:00:02.763) 0:07:51.576 ********* 2025-08-29 20:42:30.612094 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:30.612104 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:30.612114 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:30.612124 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:30.612134 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:30.612143 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:30.612153 | orchestrator | 2025-08-29 20:42:30.612163 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-08-29 20:42:30.612173 | orchestrator | Friday 29 August 2025 20:42:17 +0000 (0:00:01.304) 0:07:52.881 ********* 2025-08-29 20:42:30.612183 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:30.612194 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:30.612204 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:30.612214 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:30.612224 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:30.612233 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:30.612243 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:30.612253 | orchestrator | 2025-08-29 20:42:30.612263 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-08-29 20:42:30.612273 | orchestrator | 2025-08-29 20:42:30.612283 | orchestrator | TASK [Include hardening role] ************************************************** 2025-08-29 20:42:30.612293 | orchestrator | Friday 29 August 2025 20:42:18 +0000 (0:00:01.543) 0:07:54.425 ********* 2025-08-29 20:42:30.612302 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:42:30.612312 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:42:30.612322 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:42:30.612332 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:42:30.612342 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:42:30.612352 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:42:30.612409 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:42:30.612426 | orchestrator | 2025-08-29 20:42:30.612445 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-08-29 20:42:30.612462 | orchestrator | 2025-08-29 20:42:30.612474 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-08-29 20:42:30.612485 | orchestrator | Friday 29 August 2025 20:42:19 +0000 (0:00:00.564) 0:07:54.990 ********* 2025-08-29 20:42:30.612496 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:30.612506 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:30.612517 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:30.612528 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:30.612540 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:30.612551 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:30.612562 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:30.612573 | orchestrator | 2025-08-29 20:42:30.612606 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-08-29 20:42:30.612618 | orchestrator | Friday 29 August 2025 20:42:21 +0000 (0:00:01.560) 0:07:56.550 ********* 2025-08-29 20:42:30.612628 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:30.612639 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:30.612650 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:30.612661 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:30.612671 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:30.612681 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:30.612736 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:30.612748 | orchestrator | 2025-08-29 20:42:30.612759 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-08-29 20:42:30.612769 | orchestrator | Friday 29 August 2025 20:42:22 +0000 (0:00:01.664) 0:07:58.214 ********* 2025-08-29 20:42:30.612779 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:42:30.612789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:42:30.612799 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:42:30.612808 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:42:30.612818 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:42:30.612827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:42:30.612837 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:42:30.612847 | orchestrator | 2025-08-29 20:42:30.612856 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-08-29 20:42:30.612866 | orchestrator | Friday 29 August 2025 20:42:23 +0000 
(0:00:00.785) 0:07:59.000 ********* 2025-08-29 20:42:30.612876 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:30.612886 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:30.612895 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:30.612905 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:30.612914 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:30.612928 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:30.612938 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:30.612947 | orchestrator | 2025-08-29 20:42:30.612957 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-08-29 20:42:30.612967 | orchestrator | 2025-08-29 20:42:30.612976 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-08-29 20:42:30.612986 | orchestrator | Friday 29 August 2025 20:42:24 +0000 (0:00:01.286) 0:08:00.286 ********* 2025-08-29 20:42:30.612996 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:42:30.613008 | orchestrator | 2025-08-29 20:42:30.613017 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 20:42:30.613027 | orchestrator | Friday 29 August 2025 20:42:25 +0000 (0:00:00.888) 0:08:01.174 ********* 2025-08-29 20:42:30.613036 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:30.613046 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:30.613056 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:30.613065 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:30.613075 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:30.613084 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:30.613094 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:30.613103 | orchestrator | 2025-08-29 20:42:30.613129 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 20:42:30.613140 | orchestrator | Friday 29 August 2025 20:42:26 +0000 (0:00:00.839) 0:08:02.014 ********* 2025-08-29 20:42:30.613149 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:30.613159 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:30.613169 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:30.613178 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:30.613187 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:30.613197 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:30.613206 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:30.613216 | orchestrator | 2025-08-29 20:42:30.613233 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-08-29 20:42:30.613243 | orchestrator | Friday 29 August 2025 20:42:27 +0000 (0:00:01.141) 0:08:03.155 ********* 2025-08-29 20:42:30.613253 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:42:30.613262 | orchestrator | 2025-08-29 20:42:30.613272 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-08-29 20:42:30.613281 | orchestrator | Friday 29 August 2025 20:42:28 +0000 (0:00:00.906) 0:08:04.062 ********* 2025-08-29 20:42:30.613291 | orchestrator | ok: [testbed-manager] 2025-08-29 20:42:30.613300 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 20:42:30.613310 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:42:30.613320 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:42:30.613329 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:42:30.613339 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:42:30.613348 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:42:30.613384 | orchestrator | 2025-08-29 20:42:30.613395 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-08-29 20:42:30.613405 | orchestrator | Friday 29 August 2025 20:42:29 +0000 (0:00:00.834) 0:08:04.897 ********* 2025-08-29 20:42:30.613414 | orchestrator | changed: [testbed-manager] 2025-08-29 20:42:30.613424 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:42:30.613434 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:42:30.613443 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:42:30.613453 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:42:30.613462 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:42:30.613472 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:42:30.613481 | orchestrator | 2025-08-29 20:42:30.613491 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:42:30.613502 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-08-29 20:42:30.613512 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-08-29 20:42:30.613522 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 20:42:30.613531 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 20:42:30.613541 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 20:42:30.613551 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 20:42:30.613560 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-08-29 20:42:30.613570 | orchestrator | 2025-08-29 20:42:30.613580 | orchestrator | 2025-08-29 20:42:30.613589 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:42:30.613599 | orchestrator | Friday 29 August 2025 20:42:30 +0000 (0:00:01.176) 0:08:06.073 ********* 2025-08-29 20:42:30.613609 | orchestrator | =============================================================================== 2025-08-29 20:42:30.613618 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.65s 2025-08-29 20:42:30.613628 | orchestrator | osism.commons.packages : Download required packages -------------------- 42.43s 2025-08-29 20:42:30.613638 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.23s 2025-08-29 20:42:30.613648 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.77s 2025-08-29 20:42:30.613664 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.35s 2025-08-29 20:42:30.613674 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.85s 2025-08-29 20:42:30.613684 | orchestrator | osism.commons.systohc : Install util-linux-extra 
package --------------- 10.80s 2025-08-29 20:42:30.613693 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.72s 2025-08-29 20:42:30.613703 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.56s 2025-08-29 20:42:30.613712 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.78s 2025-08-29 20:42:30.613722 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.38s 2025-08-29 20:42:30.613731 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.31s 2025-08-29 20:42:30.613741 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.27s 2025-08-29 20:42:30.613750 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.20s 2025-08-29 20:42:30.613766 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.83s 2025-08-29 20:42:30.958430 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.80s 2025-08-29 20:42:30.958537 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.31s 2025-08-29 20:42:30.958552 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.92s 2025-08-29 20:42:30.958564 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.85s 2025-08-29 20:42:30.958576 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.72s 2025-08-29 20:42:31.201852 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 20:42:31.201963 | orchestrator | + osism apply network 2025-08-29 20:42:43.449225 | orchestrator | 2025-08-29 20:42:43 | INFO  | Task c16870bd-1f2e-468b-98e1-9989812633f6 (network) was prepared for execution. 2025-08-29 20:42:43.449341 | orchestrator | 2025-08-29 20:42:43 | INFO  | It takes a moment until task c16870bd-1f2e-468b-98e1-9989812633f6 (network) has been started and output is visible here. 
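As the "+" xtrace lines show, the deploy script drives each stage through the osism CLI: "osism apply <name>" hands the named play to the manager, and the two INFO lines mean the task has been queued; its Ansible output is streamed into this console once a worker picks it up. A minimal sketch of the pattern (play names depend on the environment):

    osism apply network      # the run whose output follows below
    osism apply bootstrap    # presumably how the bootstrap play recapped above was started

The play recap and task timing summary printed just before this point belong to that earlier bootstrap run, not to the network play that starts next.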
2025-08-29 20:43:10.917211 | orchestrator | 2025-08-29 20:43:10.917322 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-08-29 20:43:10.917339 | orchestrator | 2025-08-29 20:43:10.917352 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-08-29 20:43:10.917364 | orchestrator | Friday 29 August 2025 20:42:47 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-08-29 20:43:10.917376 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.917412 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.917424 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.917436 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.917447 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.917457 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.917468 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.917479 | orchestrator | 2025-08-29 20:43:10.917491 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-08-29 20:43:10.917502 | orchestrator | Friday 29 August 2025 20:42:48 +0000 (0:00:00.673) 0:00:00.931 ********* 2025-08-29 20:43:10.917514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:43:10.917528 | orchestrator | 2025-08-29 20:43:10.917539 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-08-29 20:43:10.917550 | orchestrator | Friday 29 August 2025 20:42:49 +0000 (0:00:01.209) 0:00:02.141 ********* 2025-08-29 20:43:10.917561 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.917572 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.917583 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.917594 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.917605 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.917642 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.917653 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.917664 | orchestrator | 2025-08-29 20:43:10.917675 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-08-29 20:43:10.917686 | orchestrator | Friday 29 August 2025 20:42:51 +0000 (0:00:01.765) 0:00:03.906 ********* 2025-08-29 20:43:10.917697 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.917707 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.917718 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.917729 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.917740 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.917753 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.917765 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.917777 | orchestrator | 2025-08-29 20:43:10.917789 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-08-29 20:43:10.917802 | orchestrator | Friday 29 August 2025 20:42:52 +0000 (0:00:01.695) 0:00:05.602 ********* 2025-08-29 20:43:10.917814 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-08-29 20:43:10.917827 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-08-29 20:43:10.917839 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-08-29 20:43:10.917851 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-08-29 20:43:10.917863 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-08-29 20:43:10.917875 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-08-29 20:43:10.917887 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-08-29 20:43:10.917899 | orchestrator | 2025-08-29 20:43:10.917911 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-08-29 20:43:10.917938 | orchestrator | Friday 29 August 2025 20:42:53 +0000 (0:00:00.960) 0:00:06.562 ********* 2025-08-29 20:43:10.917951 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 20:43:10.917964 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 20:43:10.917976 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 20:43:10.917988 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:43:10.918001 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 20:43:10.918062 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 20:43:10.918076 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 20:43:10.918089 | orchestrator | 2025-08-29 20:43:10.918101 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-08-29 20:43:10.918112 | orchestrator | Friday 29 August 2025 20:42:57 +0000 (0:00:03.294) 0:00:09.857 ********* 2025-08-29 20:43:10.918123 | orchestrator | changed: [testbed-manager] 2025-08-29 20:43:10.918134 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:43:10.918145 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:43:10.918156 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:43:10.918167 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:43:10.918178 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:43:10.918189 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:43:10.918200 | orchestrator | 2025-08-29 20:43:10.918211 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-08-29 20:43:10.918222 | orchestrator | Friday 29 August 2025 20:42:58 +0000 (0:00:01.453) 0:00:11.310 ********* 2025-08-29 20:43:10.918233 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:43:10.918244 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 20:43:10.918254 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 20:43:10.918265 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 20:43:10.918276 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 20:43:10.918287 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 20:43:10.918298 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 20:43:10.918309 | orchestrator | 2025-08-29 20:43:10.918320 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-08-29 20:43:10.918331 | orchestrator | Friday 29 August 2025 20:43:00 +0000 (0:00:01.878) 0:00:13.189 ********* 2025-08-29 20:43:10.918351 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.918363 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.918374 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.918413 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.918425 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.918437 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.918447 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.918458 | orchestrator | 2025-08-29 
20:43:10.918470 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-08-29 20:43:10.918498 | orchestrator | Friday 29 August 2025 20:43:01 +0000 (0:00:01.160) 0:00:14.349 ********* 2025-08-29 20:43:10.918510 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:43:10.918521 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:10.918532 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:10.918543 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:10.918554 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:10.918565 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:10.918576 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:10.918587 | orchestrator | 2025-08-29 20:43:10.918598 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-08-29 20:43:10.918609 | orchestrator | Friday 29 August 2025 20:43:02 +0000 (0:00:00.642) 0:00:14.992 ********* 2025-08-29 20:43:10.918620 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.918631 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.918642 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.918653 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.918664 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.918674 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.918685 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.918696 | orchestrator | 2025-08-29 20:43:10.918707 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-08-29 20:43:10.918718 | orchestrator | Friday 29 August 2025 20:43:04 +0000 (0:00:02.024) 0:00:17.017 ********* 2025-08-29 20:43:10.918729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:10.918740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:10.918751 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:10.918761 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:10.918772 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:10.918783 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:10.918794 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-08-29 20:43:10.918807 | orchestrator | 2025-08-29 20:43:10.918818 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-08-29 20:43:10.918829 | orchestrator | Friday 29 August 2025 20:43:05 +0000 (0:00:00.881) 0:00:17.898 ********* 2025-08-29 20:43:10.918840 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.918851 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:43:10.918862 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:43:10.918873 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:43:10.918884 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:43:10.918895 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:43:10.918905 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:43:10.918916 | orchestrator | 2025-08-29 20:43:10.918927 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-08-29 20:43:10.918938 | orchestrator | Friday 29 August 2025 20:43:06 +0000 (0:00:01.605) 0:00:19.504 ********* 2025-08-29 20:43:10.918949 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:43:10.918962 | orchestrator | 2025-08-29 20:43:10.918973 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 20:43:10.918992 | orchestrator | Friday 29 August 2025 20:43:07 +0000 (0:00:01.195) 0:00:20.699 ********* 2025-08-29 20:43:10.919003 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.919014 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.919025 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.919036 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.919053 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.919064 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.919075 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.919085 | orchestrator | 2025-08-29 20:43:10.919096 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-08-29 20:43:10.919108 | orchestrator | Friday 29 August 2025 20:43:08 +0000 (0:00:00.944) 0:00:21.644 ********* 2025-08-29 20:43:10.919118 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:10.919129 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:10.919140 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:10.919151 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:10.919161 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:10.919172 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:10.919183 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:10.919194 | orchestrator | 2025-08-29 20:43:10.919205 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 20:43:10.919216 | orchestrator | Friday 29 August 2025 20:43:09 +0000 (0:00:00.800) 0:00:22.444 ********* 2025-08-29 20:43:10.919227 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919238 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919249 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919260 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919271 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919282 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919293 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919303 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919314 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919325 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 20:43:10.919336 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919347 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919358 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 20:43:10.919369 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 
20:43:10.919380 | orchestrator | 2025-08-29 20:43:10.919430 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-08-29 20:43:25.807929 | orchestrator | Friday 29 August 2025 20:43:10 +0000 (0:00:01.172) 0:00:23.616 ********* 2025-08-29 20:43:25.808034 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:43:25.808051 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:25.808063 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:25.808075 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:25.808086 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:25.808097 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:25.808108 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:25.808119 | orchestrator | 2025-08-29 20:43:25.808131 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-08-29 20:43:25.808143 | orchestrator | Friday 29 August 2025 20:43:11 +0000 (0:00:00.641) 0:00:24.257 ********* 2025-08-29 20:43:25.808155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2025-08-29 20:43:25.808192 | orchestrator | 2025-08-29 20:43:25.808204 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-08-29 20:43:25.808215 | orchestrator | Friday 29 August 2025 20:43:15 +0000 (0:00:04.432) 0:00:28.690 ********* 2025-08-29 20:43:25.808228 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808276 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808461 | orchestrator | 2025-08-29 20:43:25.808472 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 20:43:25.808484 | orchestrator | Friday 29 August 2025 20:43:20 +0000 (0:00:04.844) 0:00:33.535 ********* 2025-08-29 20:43:25.808498 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808536 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 20:43:25.808600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:25.808668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:31.641950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 20:43:31.642164 | orchestrator | 2025-08-29 20:43:31.642188 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 20:43:31.642204 | orchestrator | Friday 29 August 2025 20:43:25 +0000 (0:00:04.971) 
0:00:38.507 ********* 2025-08-29 20:43:31.642218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:43:31.642230 | orchestrator | 2025-08-29 20:43:31.642241 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 20:43:31.642253 | orchestrator | Friday 29 August 2025 20:43:26 +0000 (0:00:01.131) 0:00:39.639 ********* 2025-08-29 20:43:31.642264 | orchestrator | ok: [testbed-manager] 2025-08-29 20:43:31.642277 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:43:31.642307 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:43:31.642319 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:43:31.642330 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:43:31.642341 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:43:31.642351 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:43:31.642362 | orchestrator | 2025-08-29 20:43:31.642374 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 20:43:31.642385 | orchestrator | Friday 29 August 2025 20:43:28 +0000 (0:00:01.159) 0:00:40.798 ********* 2025-08-29 20:43:31.642396 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642437 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642448 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642459 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642471 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642484 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642496 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642509 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642521 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:43:31.642534 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642546 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642564 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642577 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642589 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:31.642601 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642614 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642626 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642663 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642676 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:31.642688 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642700 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642714 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642726 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642739 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:31.642751 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642764 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642776 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642789 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642801 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:31.642813 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:31.642825 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 20:43:31.642837 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 20:43:31.642848 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 20:43:31.642858 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 20:43:31.642869 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:31.642883 | orchestrator | 2025-08-29 20:43:31.642902 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 20:43:31.642941 | orchestrator | Friday 29 August 2025 20:43:30 +0000 (0:00:01.942) 0:00:42.741 ********* 2025-08-29 20:43:31.642959 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:43:31.642978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:31.642998 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:31.643016 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:31.643034 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:31.643053 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:31.643069 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:31.643085 | orchestrator | 2025-08-29 20:43:31.643100 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 20:43:31.643117 | orchestrator | Friday 29 August 2025 20:43:30 +0000 (0:00:00.625) 0:00:43.367 ********* 2025-08-29 20:43:31.643135 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:43:31.643155 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:43:31.643173 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:43:31.643190 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:43:31.643202 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:43:31.643213 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:43:31.643224 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:43:31.643234 | orchestrator | 2025-08-29 20:43:31.643245 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:43:31.643258 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 20:43:31.643271 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643282 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643293 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643315 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643326 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643337 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 20:43:31.643348 | orchestrator | 2025-08-29 20:43:31.643360 | orchestrator | 2025-08-29 20:43:31.643371 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:43:31.643382 | orchestrator | Friday 29 August 2025 20:43:31 +0000 (0:00:00.669) 0:00:44.036 ********* 2025-08-29 20:43:31.643434 | orchestrator | =============================================================================== 2025-08-29 20:43:31.643454 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.97s 2025-08-29 20:43:31.643472 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.84s 2025-08-29 20:43:31.643489 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.43s 2025-08-29 20:43:31.643506 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.29s 2025-08-29 20:43:31.643523 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.02s 2025-08-29 20:43:31.643540 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.94s 2025-08-29 20:43:31.643559 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.88s 2025-08-29 20:43:31.643577 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.77s 2025-08-29 20:43:31.643596 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s 2025-08-29 20:43:31.643611 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2025-08-29 20:43:31.643622 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2025-08-29 20:43:31.643633 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.21s 2025-08-29 20:43:31.643644 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2025-08-29 20:43:31.643654 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2025-08-29 20:43:31.643665 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.16s 2025-08-29 20:43:31.643676 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-08-29 20:43:31.643687 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.13s 2025-08-29 20:43:31.643698 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-08-29 20:43:31.643708 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.94s 2025-08-29 20:43:31.643719 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2025-08-29 20:43:31.886887 | orchestrator | + osism apply wireguard 2025-08-29 20:43:43.731713 | orchestrator | 2025-08-29 20:43:43 | INFO  | Task aa5bc8a1-a9f0-42d0-9514-ed92fa1d4fd5 (wireguard) was prepared for execution. 2025-08-29 20:43:43.731828 | orchestrator | 2025-08-29 20:43:43 | INFO  | It takes a moment until task aa5bc8a1-a9f0-42d0-9514-ed92fa1d4fd5 (wireguard) has been started and output is visible here. 2025-08-29 20:44:01.489048 | orchestrator | 2025-08-29 20:44:01.489146 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 20:44:01.489163 | orchestrator | 2025-08-29 20:44:01.489176 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 20:44:01.489189 | orchestrator | Friday 29 August 2025 20:43:47 +0000 (0:00:00.217) 0:00:00.217 ********* 2025-08-29 20:44:01.489200 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:01.489235 | orchestrator | 2025-08-29 20:44:01.489248 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 20:44:01.489259 | orchestrator | Friday 29 August 2025 20:43:49 +0000 (0:00:01.421) 0:00:01.639 ********* 2025-08-29 20:44:01.489270 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489282 | orchestrator | 2025-08-29 20:44:01.489294 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 20:44:01.489305 | orchestrator | Friday 29 August 2025 20:43:54 +0000 (0:00:05.684) 0:00:07.324 ********* 2025-08-29 20:44:01.489317 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489328 | orchestrator | 2025-08-29 20:44:01.489339 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 20:44:01.489351 | orchestrator | Friday 29 August 2025 20:43:55 +0000 (0:00:00.491) 0:00:07.815 ********* 2025-08-29 20:44:01.489362 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489373 | orchestrator | 2025-08-29 20:44:01.489384 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 20:44:01.489395 | orchestrator | Friday 29 August 2025 20:43:55 +0000 (0:00:00.396) 0:00:08.212 ********* 2025-08-29 20:44:01.489407 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:01.489418 | orchestrator | 2025-08-29 20:44:01.489466 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 20:44:01.489479 | orchestrator | Friday 29 August 2025 20:43:56 +0000 (0:00:00.484) 0:00:08.697 ********* 2025-08-29 20:44:01.489490 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:01.489501 | orchestrator | 2025-08-29 20:44:01.489512 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 20:44:01.489523 | orchestrator | Friday 29 August 2025 20:43:56 +0000 (0:00:00.463) 0:00:09.160 ********* 2025-08-29 20:44:01.489534 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:01.489545 | orchestrator | 2025-08-29 20:44:01.489556 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 20:44:01.489567 | orchestrator | Friday 29 August 2025 20:43:56 +0000 (0:00:00.376) 0:00:09.537 ********* 2025-08-29 20:44:01.489578 | orchestrator | 
changed: [testbed-manager] 2025-08-29 20:44:01.489589 | orchestrator | 2025-08-29 20:44:01.489600 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 20:44:01.489614 | orchestrator | Friday 29 August 2025 20:43:58 +0000 (0:00:01.034) 0:00:10.572 ********* 2025-08-29 20:44:01.489627 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 20:44:01.489640 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489653 | orchestrator | 2025-08-29 20:44:01.489665 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 20:44:01.489678 | orchestrator | Friday 29 August 2025 20:43:58 +0000 (0:00:00.752) 0:00:11.324 ********* 2025-08-29 20:44:01.489689 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489700 | orchestrator | 2025-08-29 20:44:01.489724 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 20:44:01.489736 | orchestrator | Friday 29 August 2025 20:44:00 +0000 (0:00:01.515) 0:00:12.840 ********* 2025-08-29 20:44:01.489747 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:01.489758 | orchestrator | 2025-08-29 20:44:01.489769 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:44:01.489780 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:44:01.489792 | orchestrator | 2025-08-29 20:44:01.489803 | orchestrator | 2025-08-29 20:44:01.489814 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:44:01.489825 | orchestrator | Friday 29 August 2025 20:44:01 +0000 (0:00:00.956) 0:00:13.796 ********* 2025-08-29 20:44:01.489835 | orchestrator | =============================================================================== 2025-08-29 20:44:01.489846 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.68s 2025-08-29 20:44:01.489857 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.52s 2025-08-29 20:44:01.489876 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.42s 2025-08-29 20:44:01.489886 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.03s 2025-08-29 20:44:01.489897 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-08-29 20:44:01.489908 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.75s 2025-08-29 20:44:01.489919 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s 2025-08-29 20:44:01.489929 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s 2025-08-29 20:44:01.489940 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s 2025-08-29 20:44:01.489951 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-08-29 20:44:01.489961 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s 2025-08-29 20:44:01.689198 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-08-29 20:44:01.722375 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-08-29 20:44:01.722497 | 
orchestrator | Dload Upload Total Spent Left Speed 2025-08-29 20:44:01.806899 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 165 0 --:--:-- --:--:-- --:--:-- 166 2025-08-29 20:44:01.818169 | orchestrator | + osism apply --environment custom workarounds 2025-08-29 20:44:03.416394 | orchestrator | 2025-08-29 20:44:03 | INFO  | Trying to run play workarounds in environment custom 2025-08-29 20:44:13.493257 | orchestrator | 2025-08-29 20:44:13 | INFO  | Task 7d76e577-cf2c-400a-a314-b5a44df9c6f0 (workarounds) was prepared for execution. 2025-08-29 20:44:13.493372 | orchestrator | 2025-08-29 20:44:13 | INFO  | It takes a moment until task 7d76e577-cf2c-400a-a314-b5a44df9c6f0 (workarounds) has been started and output is visible here. 2025-08-29 20:44:38.242368 | orchestrator | 2025-08-29 20:44:38.242525 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:44:38.242547 | orchestrator | 2025-08-29 20:44:38.242559 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 20:44:38.242572 | orchestrator | Friday 29 August 2025 20:44:17 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-08-29 20:44:38.242584 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242596 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242607 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242618 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242629 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242640 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242651 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 20:44:38.242662 | orchestrator | 2025-08-29 20:44:38.242673 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 20:44:38.242684 | orchestrator | 2025-08-29 20:44:38.242695 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 20:44:38.242706 | orchestrator | Friday 29 August 2025 20:44:18 +0000 (0:00:00.745) 0:00:00.888 ********* 2025-08-29 20:44:38.242717 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:38.242729 | orchestrator | 2025-08-29 20:44:38.242740 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 20:44:38.242751 | orchestrator | 2025-08-29 20:44:38.242762 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 20:44:38.242773 | orchestrator | Friday 29 August 2025 20:44:20 +0000 (0:00:02.317) 0:00:03.205 ********* 2025-08-29 20:44:38.242809 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:44:38.242821 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:44:38.242832 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:44:38.242842 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:44:38.242853 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:44:38.242864 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:44:38.242875 | orchestrator | 2025-08-29 20:44:38.242885 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-08-29 20:44:38.242896 | orchestrator | 
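The play starting here distributes the testbed CA (/opt/configuration/environments/kolla/certificates/ca/testbed.crt) to every non-manager node and refreshes the system trust store; the RedHat branch is skipped on these Ubuntu 24.04 nodes. A minimal sketch of such a play, assuming the usual Debian trust-anchor directory and standard modules (the real testbed playbook may be structured differently):

- name: Add custom CA certificates to non-manager nodes
  hosts: testbed-nodes
  become: true
  tasks:
    - name: Copy custom CA certificates
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /usr/local/share/ca-certificates/   # Debian/Ubuntu trust-anchor directory (assumed)
        mode: "0644"
      loop:
        - /opt/configuration/environments/kolla/certificates/ca/testbed.crt

    - name: Run update-ca-certificates             # Debian family: rebuilds /etc/ssl/certs
      ansible.builtin.command: update-ca-certificates
      when: ansible_os_family == "Debian"
      changed_when: true

    - name: Run update-ca-trust                    # RedHat family counterpart
      ansible.builtin.command: update-ca-trust
      when: ansible_os_family == "RedHat"
      changed_when: true

In the output below, update-ca-certificates reports changed on all six nodes while update-ca-trust is skipped, which matches the Debian branch being taken on Ubuntu.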
2025-08-29 20:44:38.242908 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-08-29 20:44:38.242936 | orchestrator | Friday 29 August 2025 20:44:22 +0000 (0:00:01.843) 0:00:05.048 ********* 2025-08-29 20:44:38.242950 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.242964 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.242985 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.243004 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.243024 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.243044 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 20:44:38.243064 | orchestrator | 2025-08-29 20:44:38.243081 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-08-29 20:44:38.243099 | orchestrator | Friday 29 August 2025 20:44:23 +0000 (0:00:01.482) 0:00:06.531 ********* 2025-08-29 20:44:38.243116 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:44:38.243134 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:44:38.243151 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:44:38.243170 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:44:38.243189 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:44:38.243207 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:44:38.243226 | orchestrator | 2025-08-29 20:44:38.243246 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-08-29 20:44:38.243258 | orchestrator | Friday 29 August 2025 20:44:27 +0000 (0:00:03.918) 0:00:10.450 ********* 2025-08-29 20:44:38.243269 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:44:38.243280 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:44:38.243291 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:44:38.243309 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:44:38.243326 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:44:38.243343 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:44:38.243361 | orchestrator | 2025-08-29 20:44:38.243381 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-08-29 20:44:38.243392 | orchestrator | 2025-08-29 20:44:38.243403 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-08-29 20:44:38.243414 | orchestrator | Friday 29 August 2025 20:44:28 +0000 (0:00:00.639) 0:00:11.090 ********* 2025-08-29 20:44:38.243425 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:38.243436 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:44:38.243447 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:44:38.243458 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:44:38.243509 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:44:38.243522 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:44:38.243532 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:44:38.243543 | orchestrator | 2025-08-29 20:44:38.243555 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-08-29 20:44:38.243565 | orchestrator | Friday 29 August 2025 20:44:29 +0000 (0:00:01.625) 0:00:12.715 ********* 2025-08-29 20:44:38.243576 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:38.243600 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:44:38.243611 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:44:38.243622 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:44:38.243633 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:44:38.243643 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:44:38.243675 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:44:38.243688 | orchestrator | 2025-08-29 20:44:38.243699 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-08-29 20:44:38.243710 | orchestrator | Friday 29 August 2025 20:44:31 +0000 (0:00:01.583) 0:00:14.299 ********* 2025-08-29 20:44:38.243721 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:44:38.243732 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:44:38.243743 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:38.243753 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:44:38.243764 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:44:38.243775 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:44:38.243786 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:44:38.243797 | orchestrator | 2025-08-29 20:44:38.243808 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-08-29 20:44:38.243819 | orchestrator | Friday 29 August 2025 20:44:32 +0000 (0:00:01.500) 0:00:15.800 ********* 2025-08-29 20:44:38.243830 | orchestrator | changed: [testbed-manager] 2025-08-29 20:44:38.243841 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:44:38.243851 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:44:38.243864 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:44:38.243881 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:44:38.243900 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:44:38.243918 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:44:38.243929 | orchestrator | 2025-08-29 20:44:38.243940 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-08-29 20:44:38.243951 | orchestrator | Friday 29 August 2025 20:44:34 +0000 (0:00:01.761) 0:00:17.561 ********* 2025-08-29 20:44:38.243962 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:44:38.243973 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:44:38.243984 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:44:38.243994 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:44:38.244005 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:44:38.244016 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:44:38.244027 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:44:38.244038 | orchestrator | 2025-08-29 20:44:38.244049 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-08-29 20:44:38.244060 | orchestrator | 2025-08-29 20:44:38.244086 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-08-29 20:44:38.244109 | orchestrator | Friday 29 August 2025 20:44:35 +0000 (0:00:00.611) 0:00:18.173 ********* 2025-08-29 20:44:38.244120 | orchestrator | ok: [testbed-manager] 2025-08-29 20:44:38.244131 
| orchestrator | ok: [testbed-node-4] 2025-08-29 20:44:38.244141 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:44:38.244152 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:44:38.244163 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:44:38.244174 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:44:38.244194 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:44:38.244205 | orchestrator | 2025-08-29 20:44:38.244216 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:44:38.244228 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:44:38.244241 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244252 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244263 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244285 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244296 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244307 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:44:38.244318 | orchestrator | 2025-08-29 20:44:38.244328 | orchestrator | 2025-08-29 20:44:38.244339 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:44:38.244350 | orchestrator | Friday 29 August 2025 20:44:38 +0000 (0:00:02.836) 0:00:21.009 ********* 2025-08-29 20:44:38.244361 | orchestrator | =============================================================================== 2025-08-29 20:44:38.244372 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s 2025-08-29 20:44:38.244383 | orchestrator | Install python3-docker -------------------------------------------------- 2.84s 2025-08-29 20:44:38.244394 | orchestrator | Apply netplan configuration --------------------------------------------- 2.32s 2025-08-29 20:44:38.244405 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s 2025-08-29 20:44:38.244415 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s 2025-08-29 20:44:38.244426 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.63s 2025-08-29 20:44:38.244437 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2025-08-29 20:44:38.244448 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-08-29 20:44:38.244458 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s 2025-08-29 20:44:38.244490 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s 2025-08-29 20:44:38.244502 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s 2025-08-29 20:44:38.244520 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-08-29 20:44:38.813040 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-08-29 20:44:50.697992 | orchestrator | 
2025-08-29 20:44:50 | INFO  | Task 24717d8e-42a6-4c9b-a0e9-995eb9477e0f (reboot) was prepared for execution. 2025-08-29 20:44:50.698203 | orchestrator | 2025-08-29 20:44:50 | INFO  | It takes a moment until task 24717d8e-42a6-4c9b-a0e9-995eb9477e0f (reboot) has been started and output is visible here. 2025-08-29 20:45:00.373890 | orchestrator | 2025-08-29 20:45:00.374008 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374080 | orchestrator | 2025-08-29 20:45:00.374093 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374106 | orchestrator | Friday 29 August 2025 20:44:54 +0000 (0:00:00.206) 0:00:00.206 ********* 2025-08-29 20:45:00.374117 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:45:00.374130 | orchestrator | 2025-08-29 20:45:00.374141 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.374152 | orchestrator | Friday 29 August 2025 20:44:54 +0000 (0:00:00.091) 0:00:00.298 ********* 2025-08-29 20:45:00.374163 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:45:00.374174 | orchestrator | 2025-08-29 20:45:00.374186 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.374197 | orchestrator | Friday 29 August 2025 20:44:55 +0000 (0:00:00.906) 0:00:01.204 ********* 2025-08-29 20:45:00.374207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:45:00.374218 | orchestrator | 2025-08-29 20:45:00.374229 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374265 | orchestrator | 2025-08-29 20:45:00.374277 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374288 | orchestrator | Friday 29 August 2025 20:44:55 +0000 (0:00:00.103) 0:00:01.308 ********* 2025-08-29 20:45:00.374299 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:45:00.374310 | orchestrator | 2025-08-29 20:45:00.374321 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.374332 | orchestrator | Friday 29 August 2025 20:44:55 +0000 (0:00:00.098) 0:00:01.407 ********* 2025-08-29 20:45:00.374342 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:45:00.374353 | orchestrator | 2025-08-29 20:45:00.374364 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.374388 | orchestrator | Friday 29 August 2025 20:44:56 +0000 (0:00:00.643) 0:00:02.051 ********* 2025-08-29 20:45:00.374399 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:45:00.374412 | orchestrator | 2025-08-29 20:45:00.374424 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374437 | orchestrator | 2025-08-29 20:45:00.374450 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374462 | orchestrator | Friday 29 August 2025 20:44:56 +0000 (0:00:00.116) 0:00:02.167 ********* 2025-08-29 20:45:00.374475 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:45:00.374508 | orchestrator | 2025-08-29 20:45:00.374521 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.374533 | orchestrator | Friday 29 August 2025 20:44:56 
+0000 (0:00:00.176) 0:00:02.344 ********* 2025-08-29 20:45:00.374545 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:45:00.374557 | orchestrator | 2025-08-29 20:45:00.374575 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.374588 | orchestrator | Friday 29 August 2025 20:44:57 +0000 (0:00:00.652) 0:00:02.997 ********* 2025-08-29 20:45:00.374601 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:45:00.374614 | orchestrator | 2025-08-29 20:45:00.374626 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374637 | orchestrator | 2025-08-29 20:45:00.374648 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374659 | orchestrator | Friday 29 August 2025 20:44:57 +0000 (0:00:00.108) 0:00:03.106 ********* 2025-08-29 20:45:00.374670 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:45:00.374681 | orchestrator | 2025-08-29 20:45:00.374692 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.374703 | orchestrator | Friday 29 August 2025 20:44:57 +0000 (0:00:00.099) 0:00:03.205 ********* 2025-08-29 20:45:00.374714 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:45:00.374725 | orchestrator | 2025-08-29 20:45:00.374736 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.374747 | orchestrator | Friday 29 August 2025 20:44:58 +0000 (0:00:00.650) 0:00:03.856 ********* 2025-08-29 20:45:00.374758 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:45:00.374769 | orchestrator | 2025-08-29 20:45:00.374780 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374791 | orchestrator | 2025-08-29 20:45:00.374802 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374813 | orchestrator | Friday 29 August 2025 20:44:58 +0000 (0:00:00.108) 0:00:03.965 ********* 2025-08-29 20:45:00.374824 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:45:00.374834 | orchestrator | 2025-08-29 20:45:00.374845 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.374856 | orchestrator | Friday 29 August 2025 20:44:58 +0000 (0:00:00.091) 0:00:04.056 ********* 2025-08-29 20:45:00.374867 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:45:00.374878 | orchestrator | 2025-08-29 20:45:00.374889 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.374900 | orchestrator | Friday 29 August 2025 20:44:59 +0000 (0:00:00.646) 0:00:04.702 ********* 2025-08-29 20:45:00.374919 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:45:00.374931 | orchestrator | 2025-08-29 20:45:00.374942 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-08-29 20:45:00.374953 | orchestrator | 2025-08-29 20:45:00.374964 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-08-29 20:45:00.374975 | orchestrator | Friday 29 August 2025 20:44:59 +0000 (0:00:00.104) 0:00:04.807 ********* 2025-08-29 20:45:00.374986 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:45:00.374997 | orchestrator | 2025-08-29 20:45:00.375008 | 
orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-08-29 20:45:00.375019 | orchestrator | Friday 29 August 2025 20:44:59 +0000 (0:00:00.098) 0:00:04.906 ********* 2025-08-29 20:45:00.375030 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:45:00.375041 | orchestrator | 2025-08-29 20:45:00.375052 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-08-29 20:45:00.375063 | orchestrator | Friday 29 August 2025 20:45:00 +0000 (0:00:00.659) 0:00:05.566 ********* 2025-08-29 20:45:00.375092 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:45:00.375103 | orchestrator | 2025-08-29 20:45:00.375115 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:45:00.375127 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375139 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375150 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375161 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375172 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375183 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:45:00.375194 | orchestrator | 2025-08-29 20:45:00.375205 | orchestrator | 2025-08-29 20:45:00.375217 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:45:00.375228 | orchestrator | Friday 29 August 2025 20:45:00 +0000 (0:00:00.038) 0:00:05.604 ********* 2025-08-29 20:45:00.375239 | orchestrator | =============================================================================== 2025-08-29 20:45:00.375250 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.16s 2025-08-29 20:45:00.375261 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s 2025-08-29 20:45:00.375272 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-08-29 20:45:00.610870 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-08-29 20:45:12.504364 | orchestrator | 2025-08-29 20:45:12 | INFO  | Task 750c677a-736c-4729-b355-2e16ba00514f (wait-for-connection) was prepared for execution. 2025-08-29 20:45:12.504479 | orchestrator | 2025-08-29 20:45:12 | INFO  | It takes a moment until task 750c677a-736c-4729-b355-2e16ba00514f (wait-for-connection) has been started and output is visible here. 
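The two commands in this step implement a common pattern: the reboot play fires the reboot asynchronously and deliberately does not wait (its "wait for the reboot to complete" task is skipped), and a separate wait-for-connection play then blocks until every node answers again. A compact sketch of both halves, assuming a shell-based asynchronous reboot and roughly default wait parameters (the actual osism plays may differ):

- name: Reboot systems
  hosts: testbed-nodes
  become: true
  gather_facts: false
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to really reboot the nodes."
      when: ireallymeanit | default("no") != "yes"

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && systemctl reboot   # fire-and-forget; the sleep lets Ansible disconnect cleanly
      async: 1
      poll: 0

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 10       # give the nodes time to actually go down first
        timeout: 600

The recap below shows the reconnect completing in roughly 11.6 seconds once the nodes are back.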
2025-08-29 20:45:28.245003 | orchestrator | 2025-08-29 20:45:28.245105 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 20:45:28.245116 | orchestrator | 2025-08-29 20:45:28.245123 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 20:45:28.245130 | orchestrator | Friday 29 August 2025 20:45:16 +0000 (0:00:00.233) 0:00:00.233 ********* 2025-08-29 20:45:28.245163 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:45:28.245170 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:45:28.245176 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:45:28.245182 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:45:28.245188 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:45:28.245194 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:45:28.245199 | orchestrator | 2025-08-29 20:45:28.245205 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:45:28.245212 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245237 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245244 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245251 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245257 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245263 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:45:28.245268 | orchestrator | 2025-08-29 20:45:28.245274 | orchestrator | 2025-08-29 20:45:28.245279 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:45:28.245286 | orchestrator | Friday 29 August 2025 20:45:27 +0000 (0:00:11.618) 0:00:11.852 ********* 2025-08-29 20:45:28.245292 | orchestrator | =============================================================================== 2025-08-29 20:45:28.245298 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.62s 2025-08-29 20:45:28.516599 | orchestrator | + osism apply hddtemp 2025-08-29 20:45:40.390234 | orchestrator | 2025-08-29 20:45:40 | INFO  | Task 89009dd7-1d8a-4f7b-8862-c6598f239578 (hddtemp) was prepared for execution. 2025-08-29 20:45:40.390313 | orchestrator | 2025-08-29 20:45:40 | INFO  | It takes a moment until task 89009dd7-1d8a-4f7b-8862-c6598f239578 (hddtemp) has been started and output is visible here. 
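Despite its name, the hddtemp role that runs next removes the obsolete hddtemp package and switches the hosts to the in-kernel drivetemp hwmon driver. Persistently enabling such a module typically boils down to tasks like the following sketch (file path and module choices are assumptions, not the role's actual implementation):

- name: Switch disk temperature monitoring to drivetemp
  hosts: all
  become: true
  tasks:
    - name: Remove the legacy hddtemp package
      ansible.builtin.apt:
        name: hddtemp
        state: absent

    - name: Load drivetemp at boot                 # picked up by systemd-modules-load on the next boot
      ansible.builtin.copy:
        content: "drivetemp\n"
        dest: /etc/modules-load.d/drivetemp.conf
        mode: "0644"

    - name: Load drivetemp immediately
      community.general.modprobe:
        name: drivetemp
        state: present

Once drivetemp is bound, SATA drive temperatures show up as ordinary hwmon sensors (for example via the sensors command), which is what replaces the old hddtemp daemon.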
2025-08-29 20:46:07.032837 | orchestrator | 2025-08-29 20:46:07.032962 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 20:46:07.032980 | orchestrator | 2025-08-29 20:46:07.032992 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 20:46:07.033003 | orchestrator | Friday 29 August 2025 20:45:44 +0000 (0:00:00.234) 0:00:00.234 ********* 2025-08-29 20:46:07.033015 | orchestrator | ok: [testbed-manager] 2025-08-29 20:46:07.033027 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:46:07.033038 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:46:07.033050 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:46:07.033061 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:46:07.033072 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:46:07.033083 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:46:07.033094 | orchestrator | 2025-08-29 20:46:07.033105 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 20:46:07.033117 | orchestrator | Friday 29 August 2025 20:45:44 +0000 (0:00:00.483) 0:00:00.717 ********* 2025-08-29 20:46:07.033130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:46:07.033143 | orchestrator | 2025-08-29 20:46:07.033155 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 20:46:07.033166 | orchestrator | Friday 29 August 2025 20:45:45 +0000 (0:00:00.850) 0:00:01.567 ********* 2025-08-29 20:46:07.033177 | orchestrator | ok: [testbed-manager] 2025-08-29 20:46:07.033214 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:46:07.033226 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:46:07.033237 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:46:07.033248 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:46:07.033258 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:46:07.033270 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:46:07.033281 | orchestrator | 2025-08-29 20:46:07.033293 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 20:46:07.033320 | orchestrator | Friday 29 August 2025 20:45:47 +0000 (0:00:01.959) 0:00:03.527 ********* 2025-08-29 20:46:07.033331 | orchestrator | changed: [testbed-manager] 2025-08-29 20:46:07.033343 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:46:07.033354 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:46:07.033365 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:46:07.033378 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:46:07.033392 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:46:07.033405 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:46:07.033418 | orchestrator | 2025-08-29 20:46:07.033431 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-08-29 20:46:07.033444 | orchestrator | Friday 29 August 2025 20:45:48 +0000 (0:00:01.013) 0:00:04.540 ********* 2025-08-29 20:46:07.033457 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:46:07.033470 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:46:07.033483 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:46:07.033497 | orchestrator | ok: [testbed-node-3] 2025-08-29 
20:46:07.033509 | orchestrator | ok: [testbed-manager] 2025-08-29 20:46:07.033545 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:46:07.033559 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:46:07.033572 | orchestrator | 2025-08-29 20:46:07.033585 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 20:46:07.033599 | orchestrator | Friday 29 August 2025 20:45:49 +0000 (0:00:01.069) 0:00:05.610 ********* 2025-08-29 20:46:07.033612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:46:07.033625 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:46:07.033638 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:46:07.033650 | orchestrator | changed: [testbed-manager] 2025-08-29 20:46:07.033663 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:46:07.033676 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:46:07.033689 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:46:07.033702 | orchestrator | 2025-08-29 20:46:07.033715 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 20:46:07.033728 | orchestrator | Friday 29 August 2025 20:45:50 +0000 (0:00:00.647) 0:00:06.258 ********* 2025-08-29 20:46:07.033740 | orchestrator | changed: [testbed-manager] 2025-08-29 20:46:07.033751 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:46:07.033762 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:46:07.033773 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:46:07.033784 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:46:07.033795 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:46:07.033806 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:46:07.033817 | orchestrator | 2025-08-29 20:46:07.033828 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 20:46:07.033839 | orchestrator | Friday 29 August 2025 20:46:03 +0000 (0:00:13.271) 0:00:19.529 ********* 2025-08-29 20:46:07.033851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:46:07.033863 | orchestrator | 2025-08-29 20:46:07.033874 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 20:46:07.033885 | orchestrator | Friday 29 August 2025 20:46:04 +0000 (0:00:01.308) 0:00:20.838 ********* 2025-08-29 20:46:07.033896 | orchestrator | changed: [testbed-manager] 2025-08-29 20:46:07.033907 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:46:07.033928 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:46:07.033939 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:46:07.033951 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:46:07.033962 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:46:07.033973 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:46:07.033984 | orchestrator | 2025-08-29 20:46:07.033995 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:46:07.034007 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:46:07.034096 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034110 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034121 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034132 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034143 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034154 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:46:07.034165 | orchestrator | 2025-08-29 20:46:07.034177 | orchestrator | 2025-08-29 20:46:07.034188 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:46:07.034199 | orchestrator | Friday 29 August 2025 20:46:06 +0000 (0:00:01.753) 0:00:22.592 ********* 2025-08-29 20:46:07.034210 | orchestrator | =============================================================================== 2025-08-29 20:46:07.034221 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.27s 2025-08-29 20:46:07.034232 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.96s 2025-08-29 20:46:07.034243 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2025-08-29 20:46:07.034261 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.31s 2025-08-29 20:46:07.034272 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.07s 2025-08-29 20:46:07.034283 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.01s 2025-08-29 20:46:07.034294 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.85s 2025-08-29 20:46:07.034305 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.65s 2025-08-29 20:46:07.034317 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.48s 2025-08-29 20:46:07.263758 | orchestrator | ++ semver 9.2.0 7.1.1 2025-08-29 20:46:07.318556 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 20:46:07.318639 | orchestrator | + sudo systemctl restart manager.service 2025-08-29 20:46:20.623715 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 20:46:20.623801 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 20:46:20.623810 | orchestrator | + local max_attempts=60 2025-08-29 20:46:20.623818 | orchestrator | + local name=ceph-ansible 2025-08-29 20:46:20.623825 | orchestrator | + local attempt_num=1 2025-08-29 20:46:20.623831 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:20.663928 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:20.664043 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:20.664060 | orchestrator | + sleep 5 2025-08-29 20:46:25.669140 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:25.704132 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:25.704201 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:25.704215 | orchestrator | + sleep 5 2025-08-29 20:46:30.707657 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:30.748318 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:30.748388 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:30.748400 | orchestrator | + sleep 5 2025-08-29 20:46:35.752889 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:35.795224 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:35.795285 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:35.795299 | orchestrator | + sleep 5 2025-08-29 20:46:40.800284 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:40.835721 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:40.835800 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:40.835815 | orchestrator | + sleep 5 2025-08-29 20:46:45.839662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:45.878502 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:45.878581 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:45.878598 | orchestrator | + sleep 5 2025-08-29 20:46:50.883349 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:50.916389 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:50.916430 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:50.916797 | orchestrator | + sleep 5 2025-08-29 20:46:55.919402 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:46:55.956709 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:46:55.956776 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:46:55.956790 | orchestrator | + sleep 5 2025-08-29 20:47:00.959450 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:01.008570 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:01.008619 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:47:01.008625 | orchestrator | + sleep 5 2025-08-29 20:47:06.011882 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:06.046117 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:06.046182 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:47:06.046197 | orchestrator | + sleep 5 2025-08-29 20:47:11.050251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:11.093470 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:11.093545 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:47:11.093560 | orchestrator | + sleep 5 2025-08-29 20:47:16.098219 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:16.138897 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:16.138958 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:47:16.138973 | orchestrator | + sleep 5 2025-08-29 20:47:21.143766 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:21.180605 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:21.180696 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 20:47:21.180712 | orchestrator | + sleep 5 
2025-08-29 20:47:26.186386 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 20:47:26.225950 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:26.226138 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 20:47:26.226157 | orchestrator | + local max_attempts=60 2025-08-29 20:47:26.226169 | orchestrator | + local name=kolla-ansible 2025-08-29 20:47:26.226181 | orchestrator | + local attempt_num=1 2025-08-29 20:47:26.226201 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 20:47:26.262826 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:26.262852 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 20:47:26.262864 | orchestrator | + local max_attempts=60 2025-08-29 20:47:26.262876 | orchestrator | + local name=osism-ansible 2025-08-29 20:47:26.262886 | orchestrator | + local attempt_num=1 2025-08-29 20:47:26.264119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 20:47:26.306554 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 20:47:26.306653 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 20:47:26.306698 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 20:47:26.485006 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 20:47:26.639763 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 20:47:26.783986 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 20:47:26.938271 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 20:47:26.938365 | orchestrator | + osism apply gather-facts 2025-08-29 20:47:38.735889 | orchestrator | 2025-08-29 20:47:38 | INFO  | Task d68d021e-788d-484a-9e8a-9be41118045b (gather-facts) was prepared for execution. 2025-08-29 20:47:38.735984 | orchestrator | 2025-08-29 20:47:38 | INFO  | It takes a moment until task d68d021e-788d-484a-9e8a-9be41118045b (gather-facts) has been started and output is visible here. 
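The xtrace above is the shell helper that gates the deployment on container health after the manager.service restart. A reconstruction sketched from that trace; the variable names and the docker inspect call match the trace, while the timeout message and return code are assumptions:

  wait_for_container_healthy() {
      local max_attempts="$1"
      local name="$2"
      local attempt_num=1
      # Poll the Docker health status every five seconds, as seen in the trace.
      until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
          if (( attempt_num++ == max_attempts )); then
              echo "$name did not become healthy after $max_attempts attempts" >&2   # message assumed
              return 1
          fi
          sleep 5
      done
  }

  wait_for_container_healthy 60 ceph-ansible   # first call from the trace; kolla-ansible and osism-ansible follow

In the trace the ceph-ansible container moves from unhealthy through starting to healthy in roughly a minute, after which the same check passes immediately for kolla-ansible and osism-ansible.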
2025-08-29 20:47:51.017501 | orchestrator | 2025-08-29 20:47:51.017648 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 20:47:51.017668 | orchestrator | 2025-08-29 20:47:51.017681 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 20:47:51.017708 | orchestrator | Friday 29 August 2025 20:47:42 +0000 (0:00:00.163) 0:00:00.163 ********* 2025-08-29 20:47:51.017720 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:47:51.017732 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:47:51.017743 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:47:51.017754 | orchestrator | ok: [testbed-manager] 2025-08-29 20:47:51.017765 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:47:51.017776 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:47:51.017787 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:47:51.017797 | orchestrator | 2025-08-29 20:47:51.017808 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 20:47:51.017819 | orchestrator | 2025-08-29 20:47:51.017830 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 20:47:51.017841 | orchestrator | Friday 29 August 2025 20:47:50 +0000 (0:00:08.198) 0:00:08.362 ********* 2025-08-29 20:47:51.017852 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:47:51.017863 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:47:51.017873 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:47:51.017884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:47:51.017895 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:47:51.017906 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:47:51.017916 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:47:51.017927 | orchestrator | 2025-08-29 20:47:51.017938 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:47:51.017949 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.017961 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.017972 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.017983 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.017993 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.018004 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.018064 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:47:51.018079 | orchestrator | 2025-08-29 20:47:51.018092 | orchestrator | 2025-08-29 20:47:51.018105 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:47:51.018118 | orchestrator | Friday 29 August 2025 20:47:50 +0000 (0:00:00.457) 0:00:08.820 ********* 2025-08-29 20:47:51.018151 | orchestrator | =============================================================================== 2025-08-29 20:47:51.018164 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.20s 2025-08-29 
20:47:51.018176 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-08-29 20:47:51.190527 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-08-29 20:47:51.202773 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-08-29 20:47:51.212567 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-08-29 20:47:51.230535 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-08-29 20:47:51.243333 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-08-29 20:47:51.253460 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-08-29 20:47:51.262812 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-08-29 20:47:51.274485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-08-29 20:47:51.283277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-08-29 20:47:51.292755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-08-29 20:47:51.305559 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-08-29 20:47:51.315395 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-08-29 20:47:51.326219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-08-29 20:47:51.335461 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-08-29 20:47:51.345356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-08-29 20:47:51.355235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-08-29 20:47:51.375088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-08-29 20:47:51.390636 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-08-29 20:47:51.413906 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-08-29 20:47:51.427277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-08-29 20:47:51.438115 | orchestrator | + [[ false == \t\r\u\e ]] 2025-08-29 20:47:51.547577 | orchestrator | ok: Runtime: 0:22:39.753609 2025-08-29 20:47:51.662557 | 2025-08-29 20:47:51.662709 | TASK [Deploy services] 2025-08-29 20:47:52.194123 | orchestrator | skipping: Conditional result was False 2025-08-29 20:47:52.216509 | 2025-08-29 20:47:52.216679 | TASK [Deploy in a nutshell] 2025-08-29 20:47:52.854471 | orchestrator | + set -e 
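The block of ln -sf calls above exposes each stage script under a short command name in /usr/local/bin (deploy-ceph-with-ansible, deploy-infrastructure, deploy-openstack, and so on). A compact sketch of the same pattern for a few deploy stages; the paths are taken from the log, while the loop and name derivation are illustrative only (the actual script issues explicit ln -sf calls):

  for script in 100-ceph-with-ansible.sh 200-infrastructure.sh 300-openstack.sh 400-monitoring.sh; do
      name="deploy-$(echo "${script%.sh}" | sed 's/^[0-9]*-//')"   # e.g. deploy-ceph-with-ansible
      sudo ln -sf "/opt/configuration/scripts/deploy/${script}" "/usr/local/bin/${name}"
  done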
2025-08-29 20:47:52.854671 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 20:47:52.854692 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 20:47:52.854713 | orchestrator | ++ INTERACTIVE=false 2025-08-29 20:47:52.854725 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 20:47:52.854737 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 20:47:52.854761 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 20:47:52.854805 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 20:47:52.854830 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 20:47:52.854843 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 20:47:52.854857 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 20:47:52.854868 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 20:47:52.854884 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 20:47:52.854895 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 20:47:52.854914 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 20:47:52.854924 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 20:47:52.854936 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 20:47:52.854946 | orchestrator | ++ export ARA=false 2025-08-29 20:47:52.854956 | orchestrator | ++ ARA=false 2025-08-29 20:47:52.854970 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 20:47:52.854982 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 20:47:52.854992 | orchestrator | ++ export TEMPEST=false 2025-08-29 20:47:52.855001 | orchestrator | ++ TEMPEST=false 2025-08-29 20:47:52.855011 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 20:47:52.855021 | orchestrator | ++ IS_ZUUL=true 2025-08-29 20:47:52.855031 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:47:52.855041 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 20:47:52.855051 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 20:47:52.855061 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 20:47:52.855070 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 20:47:52.855080 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 20:47:52.855090 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 20:47:52.855100 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 20:47:52.855110 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 20:47:52.855120 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 20:47:52.855133 | orchestrator | + echo 2025-08-29 20:47:52.855143 | orchestrator | 2025-08-29 20:47:52.855153 | orchestrator | # PULL IMAGES 2025-08-29 20:47:52.855162 | orchestrator | 2025-08-29 20:47:52.855175 | orchestrator | + echo '# PULL IMAGES' 2025-08-29 20:47:52.855186 | orchestrator | + echo 2025-08-29 20:47:52.856553 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 20:47:52.908684 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 20:47:52.908783 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-08-29 20:47:54.605994 | orchestrator | 2025-08-29 20:47:54 | INFO  | Trying to run play pull-images in environment custom 2025-08-29 20:48:04.705348 | orchestrator | 2025-08-29 20:48:04 | INFO  | Task 0866789d-d5a1-4989-a1e4-116c7e101f73 (pull-images) was prepared for execution. 2025-08-29 20:48:04.705479 | orchestrator | 2025-08-29 20:48:04 | INFO  | Task 0866789d-d5a1-4989-a1e4-116c7e101f73 is running in background. No more output. Check ARA for logs. 
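The xtrace above documents the environment the in-a-nutshell deployment runs with. For reference, a sketch of the sourced files containing only the variables that actually appear in the trace (the real files may define more):

  # From /opt/configuration/scripts/include.sh, per the trace:
  export INTERACTIVE=false
  export OSISM_APPLY_RETRY=1

  # From /opt/manager-vars.sh, per the trace:
  export NUMBER_OF_NODES=6
  export CEPH_VERSION=reef
  export CONFIGURATION_VERSION=main
  export MANAGER_VERSION=9.2.0
  export OPENSTACK_VERSION=2024.2
  export ARA=false
  export DEPLOY_MODE=manager
  export TEMPEST=false
  export IS_ZUUL=true
  export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51
  export EXTERNAL_API=false
  export IMAGE_USER=ubuntu
  export IMAGE_NODE_USER=ubuntu
  export CEPH_STACK=ceph-ansible

The semver 9.2.0 7.0.0 check that follows gates the non-blocking image pull (osism apply --no-wait -r 2 -e custom pull-images) on the manager version.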
2025-08-29 20:48:06.789648 | orchestrator | 2025-08-29 20:48:06 | INFO  | Trying to run play wipe-partitions in environment custom 2025-08-29 20:48:16.968030 | orchestrator | 2025-08-29 20:48:16 | INFO  | Task 5ec4a27a-f75f-4b90-8fa9-2ae55e4784a0 (wipe-partitions) was prepared for execution. 2025-08-29 20:48:16.968132 | orchestrator | 2025-08-29 20:48:16 | INFO  | It takes a moment until task 5ec4a27a-f75f-4b90-8fa9-2ae55e4784a0 (wipe-partitions) has been started and output is visible here. 2025-08-29 20:48:30.141373 | orchestrator | 2025-08-29 20:48:30.141488 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-08-29 20:48:30.141505 | orchestrator | 2025-08-29 20:48:30.141517 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-08-29 20:48:30.141534 | orchestrator | Friday 29 August 2025 20:48:21 +0000 (0:00:00.125) 0:00:00.125 ********* 2025-08-29 20:48:30.141548 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:48:30.141560 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:48:30.141571 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:48:30.141582 | orchestrator | 2025-08-29 20:48:30.141593 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-08-29 20:48:30.141628 | orchestrator | Friday 29 August 2025 20:48:21 +0000 (0:00:00.590) 0:00:00.716 ********* 2025-08-29 20:48:30.141640 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:48:30.141652 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:48:30.141662 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:48:30.141728 | orchestrator | 2025-08-29 20:48:30.141743 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-08-29 20:48:30.141753 | orchestrator | Friday 29 August 2025 20:48:22 +0000 (0:00:00.231) 0:00:00.947 ********* 2025-08-29 20:48:30.141764 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:48:30.141776 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:48:30.141786 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:48:30.141797 | orchestrator | 2025-08-29 20:48:30.141808 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-08-29 20:48:30.141819 | orchestrator | Friday 29 August 2025 20:48:22 +0000 (0:00:00.625) 0:00:01.573 ********* 2025-08-29 20:48:30.141829 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:48:30.141840 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:48:30.141850 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:48:30.141861 | orchestrator | 2025-08-29 20:48:30.141871 | orchestrator | TASK [Check device availability] *********************************************** 2025-08-29 20:48:30.141882 | orchestrator | Friday 29 August 2025 20:48:22 +0000 (0:00:00.207) 0:00:01.781 ********* 2025-08-29 20:48:30.141895 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 20:48:30.141913 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 20:48:30.141926 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 20:48:30.141938 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 20:48:30.141950 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 20:48:30.141962 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 20:48:30.141974 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-08-29 20:48:30.141986 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 20:48:30.141998 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 20:48:30.142010 | orchestrator | 2025-08-29 20:48:30.142098 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-08-29 20:48:30.142117 | orchestrator | Friday 29 August 2025 20:48:25 +0000 (0:00:02.097) 0:00:03.878 ********* 2025-08-29 20:48:30.142138 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 20:48:30.142158 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 20:48:30.142178 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 20:48:30.142191 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 20:48:30.142203 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 20:48:30.142216 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 20:48:30.142228 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 20:48:30.142241 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 20:48:30.142252 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 20:48:30.142262 | orchestrator | 2025-08-29 20:48:30.142273 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-08-29 20:48:30.142284 | orchestrator | Friday 29 August 2025 20:48:26 +0000 (0:00:01.323) 0:00:05.202 ********* 2025-08-29 20:48:30.142294 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 20:48:30.142305 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 20:48:30.142316 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 20:48:30.142326 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 20:48:30.142337 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 20:48:30.142356 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 20:48:30.142367 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 20:48:30.142377 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 20:48:30.142398 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 20:48:30.142409 | orchestrator | 2025-08-29 20:48:30.142419 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-08-29 20:48:30.142430 | orchestrator | Friday 29 August 2025 20:48:28 +0000 (0:00:02.243) 0:00:07.446 ********* 2025-08-29 20:48:30.142441 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:48:30.142451 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:48:30.142462 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:48:30.142472 | orchestrator | 2025-08-29 20:48:30.142483 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-08-29 20:48:30.142494 | orchestrator | Friday 29 August 2025 20:48:29 +0000 (0:00:00.589) 0:00:08.036 ********* 2025-08-29 20:48:30.142505 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:48:30.142515 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:48:30.142526 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:48:30.142536 | orchestrator | 2025-08-29 20:48:30.142547 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:48:30.142559 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:30.142573 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:30.142604 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:30.142615 | orchestrator | 2025-08-29 20:48:30.142626 | orchestrator | 2025-08-29 20:48:30.142637 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:48:30.142648 | orchestrator | Friday 29 August 2025 20:48:29 +0000 (0:00:00.609) 0:00:08.646 ********* 2025-08-29 20:48:30.142659 | orchestrator | =============================================================================== 2025-08-29 20:48:30.142670 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s 2025-08-29 20:48:30.142706 | orchestrator | Check device availability ----------------------------------------------- 2.10s 2025-08-29 20:48:30.142717 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s 2025-08-29 20:48:30.142728 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2025-08-29 20:48:30.142738 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-08-29 20:48:30.142749 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-08-29 20:48:30.142759 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-08-29 20:48:30.142770 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-08-29 20:48:30.142781 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2025-08-29 20:48:42.195001 | orchestrator | 2025-08-29 20:48:42 | INFO  | Task 8327c2b0-ed02-4d0c-aa13-cc36261ca333 (facts) was prepared for execution. 2025-08-29 20:48:42.195096 | orchestrator | 2025-08-29 20:48:42 | INFO  | It takes a moment until task 8327c2b0-ed02-4d0c-aa13-cc36261ca333 (facts) has been started and output is visible here. 
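The wipe-partitions play above clears /dev/sdb, /dev/sdc and /dev/sdd on testbed-node-3/4/5 before Ceph provisioning. A hedged manual equivalent of the task names in its recap; the device list comes from the log, while the concrete commands behind each task name are assumptions:

  for dev in /dev/sdb /dev/sdc /dev/sdd; do
      sudo wipefs --all "$dev"                        # "Wipe partitions with wipefs"
      sudo dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
  done
  sudo udevadm control --reload-rules                 # "Reload udev rules" (exact flag assumed)
  sudo udevadm trigger                                # "Request device events from the kernel" (command assumed)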
2025-08-29 20:48:53.125038 | orchestrator | 2025-08-29 20:48:53.125166 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 20:48:53.125183 | orchestrator | 2025-08-29 20:48:53.125196 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 20:48:53.125208 | orchestrator | Friday 29 August 2025 20:48:45 +0000 (0:00:00.196) 0:00:00.196 ********* 2025-08-29 20:48:53.125220 | orchestrator | ok: [testbed-manager] 2025-08-29 20:48:53.125232 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:48:53.125243 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:48:53.125254 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:48:53.125292 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:48:53.125304 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:48:53.125314 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:48:53.125326 | orchestrator | 2025-08-29 20:48:53.125340 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 20:48:53.125351 | orchestrator | Friday 29 August 2025 20:48:46 +0000 (0:00:00.925) 0:00:01.122 ********* 2025-08-29 20:48:53.125362 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:48:53.125374 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:48:53.125385 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:48:53.125396 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:48:53.125406 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:48:53.125417 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:48:53.125428 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:48:53.125439 | orchestrator | 2025-08-29 20:48:53.125450 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 20:48:53.125461 | orchestrator | 2025-08-29 20:48:53.125472 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 20:48:53.125483 | orchestrator | Friday 29 August 2025 20:48:47 +0000 (0:00:01.029) 0:00:02.151 ********* 2025-08-29 20:48:53.125494 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:48:53.125505 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:48:53.125516 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:48:53.125528 | orchestrator | ok: [testbed-manager] 2025-08-29 20:48:53.125539 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:48:53.125550 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:48:53.125562 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:48:53.125573 | orchestrator | 2025-08-29 20:48:53.125585 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 20:48:53.125597 | orchestrator | 2025-08-29 20:48:53.125610 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 20:48:53.125638 | orchestrator | Friday 29 August 2025 20:48:52 +0000 (0:00:04.698) 0:00:06.850 ********* 2025-08-29 20:48:53.125651 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:48:53.125663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:48:53.125675 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:48:53.125687 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:48:53.125699 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:48:53.125711 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:48:53.125750 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 20:48:53.125763 | orchestrator | 2025-08-29 20:48:53.125776 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:48:53.125790 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125804 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125817 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125829 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125841 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125854 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125866 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:48:53.125879 | orchestrator | 2025-08-29 20:48:53.125891 | orchestrator | 2025-08-29 20:48:53.125903 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:48:53.125928 | orchestrator | Friday 29 August 2025 20:48:52 +0000 (0:00:00.506) 0:00:07.357 ********* 2025-08-29 20:48:53.125939 | orchestrator | =============================================================================== 2025-08-29 20:48:53.125950 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.70s 2025-08-29 20:48:53.125960 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.03s 2025-08-29 20:48:53.125971 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.93s 2025-08-29 20:48:53.125983 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-08-29 20:48:55.214118 | orchestrator | 2025-08-29 20:48:55 | INFO  | Task 7b06c7ce-d2b1-4ef4-90c5-108f26b71296 (ceph-configure-lvm-volumes) was prepared for execution. 2025-08-29 20:48:55.214208 | orchestrator | 2025-08-29 20:48:55 | INFO  | It takes a moment until task 7b06c7ce-d2b1-4ef4-90c5-108f26b71296 (ceph-configure-lvm-volumes) has been started and output is visible here. 
2025-08-29 20:49:05.279669 | orchestrator | 2025-08-29 20:49:05.279821 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 20:49:05.279839 | orchestrator | 2025-08-29 20:49:05.279851 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:49:05.279866 | orchestrator | Friday 29 August 2025 20:48:58 +0000 (0:00:00.238) 0:00:00.238 ********* 2025-08-29 20:49:05.279878 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:05.279890 | orchestrator | 2025-08-29 20:49:05.279901 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:49:05.279912 | orchestrator | Friday 29 August 2025 20:48:59 +0000 (0:00:00.210) 0:00:00.449 ********* 2025-08-29 20:49:05.279924 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:49:05.279936 | orchestrator | 2025-08-29 20:49:05.279947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.279958 | orchestrator | Friday 29 August 2025 20:48:59 +0000 (0:00:00.206) 0:00:00.656 ********* 2025-08-29 20:49:05.279969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 20:49:05.279981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 20:49:05.279992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 20:49:05.280003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 20:49:05.280015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 20:49:05.280026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 20:49:05.280037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 20:49:05.280048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 20:49:05.280059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 20:49:05.280070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 20:49:05.280081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 20:49:05.280100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 20:49:05.280112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 20:49:05.280123 | orchestrator | 2025-08-29 20:49:05.280134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280145 | orchestrator | Friday 29 August 2025 20:48:59 +0000 (0:00:00.315) 0:00:00.971 ********* 2025-08-29 20:49:05.280156 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280168 | orchestrator | 2025-08-29 20:49:05.280200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280214 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.335) 0:00:01.307 ********* 2025-08-29 20:49:05.280227 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
20:49:05.280240 | orchestrator | 2025-08-29 20:49:05.280253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280265 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.173) 0:00:01.480 ********* 2025-08-29 20:49:05.280278 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280290 | orchestrator | 2025-08-29 20:49:05.280303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280316 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.161) 0:00:01.642 ********* 2025-08-29 20:49:05.280329 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280342 | orchestrator | 2025-08-29 20:49:05.280359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280372 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.162) 0:00:01.805 ********* 2025-08-29 20:49:05.280385 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280397 | orchestrator | 2025-08-29 20:49:05.280410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280423 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.165) 0:00:01.970 ********* 2025-08-29 20:49:05.280436 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280448 | orchestrator | 2025-08-29 20:49:05.280461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280473 | orchestrator | Friday 29 August 2025 20:49:00 +0000 (0:00:00.175) 0:00:02.146 ********* 2025-08-29 20:49:05.280486 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280498 | orchestrator | 2025-08-29 20:49:05.280511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280524 | orchestrator | Friday 29 August 2025 20:49:01 +0000 (0:00:00.177) 0:00:02.324 ********* 2025-08-29 20:49:05.280537 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.280549 | orchestrator | 2025-08-29 20:49:05.280561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280574 | orchestrator | Friday 29 August 2025 20:49:01 +0000 (0:00:00.162) 0:00:02.487 ********* 2025-08-29 20:49:05.280585 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745) 2025-08-29 20:49:05.280597 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745) 2025-08-29 20:49:05.280608 | orchestrator | 2025-08-29 20:49:05.280619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280630 | orchestrator | Friday 29 August 2025 20:49:01 +0000 (0:00:00.367) 0:00:02.854 ********* 2025-08-29 20:49:05.280659 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4) 2025-08-29 20:49:05.280671 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4) 2025-08-29 20:49:05.280682 | orchestrator | 2025-08-29 20:49:05.280693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280704 | orchestrator | Friday 29 August 2025 20:49:01 +0000 (0:00:00.369) 0:00:03.223 ********* 2025-08-29 
20:49:05.280715 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf) 2025-08-29 20:49:05.280726 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf) 2025-08-29 20:49:05.280757 | orchestrator | 2025-08-29 20:49:05.280769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280780 | orchestrator | Friday 29 August 2025 20:49:02 +0000 (0:00:00.477) 0:00:03.701 ********* 2025-08-29 20:49:05.280791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346) 2025-08-29 20:49:05.280811 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346) 2025-08-29 20:49:05.280822 | orchestrator | 2025-08-29 20:49:05.280833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:05.280844 | orchestrator | Friday 29 August 2025 20:49:02 +0000 (0:00:00.492) 0:00:04.194 ********* 2025-08-29 20:49:05.280855 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:49:05.280866 | orchestrator | 2025-08-29 20:49:05.280877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.280893 | orchestrator | Friday 29 August 2025 20:49:03 +0000 (0:00:00.529) 0:00:04.723 ********* 2025-08-29 20:49:05.280905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 20:49:05.280916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 20:49:05.280927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 20:49:05.280938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 20:49:05.280949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 20:49:05.280960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 20:49:05.280971 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 20:49:05.280982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 20:49:05.280993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 20:49:05.281004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 20:49:05.281014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 20:49:05.281025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 20:49:05.281036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 20:49:05.281047 | orchestrator | 2025-08-29 20:49:05.281058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281069 | orchestrator | Friday 29 August 2025 20:49:03 +0000 (0:00:00.328) 0:00:05.052 ********* 2025-08-29 20:49:05.281080 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 20:49:05.281091 | orchestrator | 2025-08-29 20:49:05.281102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281113 | orchestrator | Friday 29 August 2025 20:49:03 +0000 (0:00:00.194) 0:00:05.246 ********* 2025-08-29 20:49:05.281124 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281135 | orchestrator | 2025-08-29 20:49:05.281145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281156 | orchestrator | Friday 29 August 2025 20:49:04 +0000 (0:00:00.198) 0:00:05.445 ********* 2025-08-29 20:49:05.281167 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281178 | orchestrator | 2025-08-29 20:49:05.281189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281200 | orchestrator | Friday 29 August 2025 20:49:04 +0000 (0:00:00.166) 0:00:05.612 ********* 2025-08-29 20:49:05.281211 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281222 | orchestrator | 2025-08-29 20:49:05.281233 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281244 | orchestrator | Friday 29 August 2025 20:49:04 +0000 (0:00:00.172) 0:00:05.784 ********* 2025-08-29 20:49:05.281255 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281266 | orchestrator | 2025-08-29 20:49:05.281277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281294 | orchestrator | Friday 29 August 2025 20:49:04 +0000 (0:00:00.179) 0:00:05.963 ********* 2025-08-29 20:49:05.281305 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281316 | orchestrator | 2025-08-29 20:49:05.281327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281338 | orchestrator | Friday 29 August 2025 20:49:04 +0000 (0:00:00.180) 0:00:06.144 ********* 2025-08-29 20:49:05.281349 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:05.281360 | orchestrator | 2025-08-29 20:49:05.281370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:05.281382 | orchestrator | Friday 29 August 2025 20:49:05 +0000 (0:00:00.183) 0:00:06.327 ********* 2025-08-29 20:49:05.281399 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.371504 | orchestrator | 2025-08-29 20:49:12.371622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:12.371640 | orchestrator | Friday 29 August 2025 20:49:05 +0000 (0:00:00.193) 0:00:06.520 ********* 2025-08-29 20:49:12.371653 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 20:49:12.371666 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 20:49:12.371677 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 20:49:12.371688 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 20:49:12.371699 | orchestrator | 2025-08-29 20:49:12.371711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:12.371722 | orchestrator | Friday 29 August 2025 20:49:06 +0000 (0:00:00.897) 0:00:07.418 ********* 2025-08-29 20:49:12.371733 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.371793 | orchestrator | 2025-08-29 20:49:12.371805 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:12.371816 | orchestrator | Friday 29 August 2025 20:49:06 +0000 (0:00:00.185) 0:00:07.603 ********* 2025-08-29 20:49:12.371827 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.371838 | orchestrator | 2025-08-29 20:49:12.371849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:12.371860 | orchestrator | Friday 29 August 2025 20:49:06 +0000 (0:00:00.192) 0:00:07.796 ********* 2025-08-29 20:49:12.371871 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.371881 | orchestrator | 2025-08-29 20:49:12.371892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:12.371903 | orchestrator | Friday 29 August 2025 20:49:06 +0000 (0:00:00.205) 0:00:08.001 ********* 2025-08-29 20:49:12.371913 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.371924 | orchestrator | 2025-08-29 20:49:12.371935 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 20:49:12.371946 | orchestrator | Friday 29 August 2025 20:49:06 +0000 (0:00:00.192) 0:00:08.194 ********* 2025-08-29 20:49:12.371957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 20:49:12.371967 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 20:49:12.371978 | orchestrator | 2025-08-29 20:49:12.371989 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 20:49:12.372000 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.167) 0:00:08.362 ********* 2025-08-29 20:49:12.372030 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372043 | orchestrator | 2025-08-29 20:49:12.372056 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 20:49:12.372068 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.137) 0:00:08.499 ********* 2025-08-29 20:49:12.372080 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372092 | orchestrator | 2025-08-29 20:49:12.372104 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 20:49:12.372116 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.131) 0:00:08.631 ********* 2025-08-29 20:49:12.372128 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372140 | orchestrator | 2025-08-29 20:49:12.372176 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 20:49:12.372189 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.132) 0:00:08.763 ********* 2025-08-29 20:49:12.372200 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:49:12.372213 | orchestrator | 2025-08-29 20:49:12.372225 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 20:49:12.372238 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.131) 0:00:08.895 ********* 2025-08-29 20:49:12.372250 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}}) 2025-08-29 20:49:12.372262 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79476f9b-63cb-5c74-926b-50a3eb682c43'}}) 2025-08-29 20:49:12.372274 | orchestrator | 
2025-08-29 20:49:12.372287 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 20:49:12.372299 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.155) 0:00:09.050 ********* 2025-08-29 20:49:12.372312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}})  2025-08-29 20:49:12.372331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79476f9b-63cb-5c74-926b-50a3eb682c43'}})  2025-08-29 20:49:12.372344 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372356 | orchestrator | 2025-08-29 20:49:12.372368 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 20:49:12.372381 | orchestrator | Friday 29 August 2025 20:49:07 +0000 (0:00:00.142) 0:00:09.193 ********* 2025-08-29 20:49:12.372393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}})  2025-08-29 20:49:12.372404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79476f9b-63cb-5c74-926b-50a3eb682c43'}})  2025-08-29 20:49:12.372415 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372426 | orchestrator | 2025-08-29 20:49:12.372437 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 20:49:12.372447 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.150) 0:00:09.343 ********* 2025-08-29 20:49:12.372458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}})  2025-08-29 20:49:12.372469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79476f9b-63cb-5c74-926b-50a3eb682c43'}})  2025-08-29 20:49:12.372480 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372491 | orchestrator | 2025-08-29 20:49:12.372521 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 20:49:12.372533 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.296) 0:00:09.639 ********* 2025-08-29 20:49:12.372544 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:49:12.372555 | orchestrator | 2025-08-29 20:49:12.372572 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 20:49:12.372583 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.142) 0:00:09.782 ********* 2025-08-29 20:49:12.372594 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:49:12.372605 | orchestrator | 2025-08-29 20:49:12.372616 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 20:49:12.372627 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.141) 0:00:09.923 ********* 2025-08-29 20:49:12.372638 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372648 | orchestrator | 2025-08-29 20:49:12.372659 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 20:49:12.372670 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.131) 0:00:10.055 ********* 2025-08-29 20:49:12.372681 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372692 | orchestrator | 2025-08-29 20:49:12.372702 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-08-29 20:49:12.372720 | orchestrator | Friday 29 August 2025 20:49:08 +0000 (0:00:00.115) 0:00:10.171 ********* 2025-08-29 20:49:12.372732 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372762 | orchestrator | 2025-08-29 20:49:12.372774 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 20:49:12.372785 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.129) 0:00:10.300 ********* 2025-08-29 20:49:12.372796 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:49:12.372807 | orchestrator |  "ceph_osd_devices": { 2025-08-29 20:49:12.372818 | orchestrator |  "sdb": { 2025-08-29 20:49:12.372830 | orchestrator |  "osd_lvm_uuid": "028c3e14-b13d-554d-9ec8-e0bdecd4a1f0" 2025-08-29 20:49:12.372841 | orchestrator |  }, 2025-08-29 20:49:12.372852 | orchestrator |  "sdc": { 2025-08-29 20:49:12.372863 | orchestrator |  "osd_lvm_uuid": "79476f9b-63cb-5c74-926b-50a3eb682c43" 2025-08-29 20:49:12.372874 | orchestrator |  } 2025-08-29 20:49:12.372886 | orchestrator |  } 2025-08-29 20:49:12.372897 | orchestrator | } 2025-08-29 20:49:12.372908 | orchestrator | 2025-08-29 20:49:12.372919 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 20:49:12.372930 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.139) 0:00:10.440 ********* 2025-08-29 20:49:12.372941 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372951 | orchestrator | 2025-08-29 20:49:12.372962 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 20:49:12.372973 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.118) 0:00:10.559 ********* 2025-08-29 20:49:12.372984 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.372995 | orchestrator | 2025-08-29 20:49:12.373005 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 20:49:12.373016 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.141) 0:00:10.700 ********* 2025-08-29 20:49:12.373027 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:49:12.373038 | orchestrator | 2025-08-29 20:49:12.373048 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 20:49:12.373059 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.129) 0:00:10.829 ********* 2025-08-29 20:49:12.373070 | orchestrator | changed: [testbed-node-3] => { 2025-08-29 20:49:12.373081 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 20:49:12.373092 | orchestrator |  "ceph_osd_devices": { 2025-08-29 20:49:12.373103 | orchestrator |  "sdb": { 2025-08-29 20:49:12.373114 | orchestrator |  "osd_lvm_uuid": "028c3e14-b13d-554d-9ec8-e0bdecd4a1f0" 2025-08-29 20:49:12.373125 | orchestrator |  }, 2025-08-29 20:49:12.373136 | orchestrator |  "sdc": { 2025-08-29 20:49:12.373146 | orchestrator |  "osd_lvm_uuid": "79476f9b-63cb-5c74-926b-50a3eb682c43" 2025-08-29 20:49:12.373157 | orchestrator |  } 2025-08-29 20:49:12.373168 | orchestrator |  }, 2025-08-29 20:49:12.373179 | orchestrator |  "lvm_volumes": [ 2025-08-29 20:49:12.373190 | orchestrator |  { 2025-08-29 20:49:12.373201 | orchestrator |  "data": "osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0", 2025-08-29 20:49:12.373212 | orchestrator |  "data_vg": "ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0" 2025-08-29 20:49:12.373223 | orchestrator |  }, 2025-08-29 
20:49:12.373234 | orchestrator |  { 2025-08-29 20:49:12.373245 | orchestrator |  "data": "osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43", 2025-08-29 20:49:12.373256 | orchestrator |  "data_vg": "ceph-79476f9b-63cb-5c74-926b-50a3eb682c43" 2025-08-29 20:49:12.373267 | orchestrator |  } 2025-08-29 20:49:12.373277 | orchestrator |  ] 2025-08-29 20:49:12.373288 | orchestrator |  } 2025-08-29 20:49:12.373300 | orchestrator | } 2025-08-29 20:49:12.373310 | orchestrator | 2025-08-29 20:49:12.373327 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 20:49:12.373338 | orchestrator | Friday 29 August 2025 20:49:09 +0000 (0:00:00.201) 0:00:11.031 ********* 2025-08-29 20:49:12.373359 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:12.373370 | orchestrator | 2025-08-29 20:49:12.373381 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 20:49:12.373392 | orchestrator | 2025-08-29 20:49:12.373403 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:49:12.373413 | orchestrator | Friday 29 August 2025 20:49:11 +0000 (0:00:02.067) 0:00:13.098 ********* 2025-08-29 20:49:12.373424 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:12.373435 | orchestrator | 2025-08-29 20:49:12.373446 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:49:12.373457 | orchestrator | Friday 29 August 2025 20:49:12 +0000 (0:00:00.280) 0:00:13.379 ********* 2025-08-29 20:49:12.373468 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:49:12.373479 | orchestrator | 2025-08-29 20:49:12.373490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:12.373507 | orchestrator | Friday 29 August 2025 20:49:12 +0000 (0:00:00.233) 0:00:13.612 ********* 2025-08-29 20:49:19.731079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 20:49:19.731187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 20:49:19.731203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 20:49:19.731215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 20:49:19.731226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 20:49:19.731237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 20:49:19.731249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 20:49:19.731260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 20:49:19.731271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 20:49:19.731282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 20:49:19.731293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 20:49:19.731304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 20:49:19.731315 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 20:49:19.731327 | orchestrator | 2025-08-29 20:49:19.731343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731356 | orchestrator | Friday 29 August 2025 20:49:12 +0000 (0:00:00.386) 0:00:13.999 ********* 2025-08-29 20:49:19.731367 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731380 | orchestrator | 2025-08-29 20:49:19.731391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731402 | orchestrator | Friday 29 August 2025 20:49:12 +0000 (0:00:00.219) 0:00:14.218 ********* 2025-08-29 20:49:19.731413 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731424 | orchestrator | 2025-08-29 20:49:19.731434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731445 | orchestrator | Friday 29 August 2025 20:49:13 +0000 (0:00:00.206) 0:00:14.425 ********* 2025-08-29 20:49:19.731456 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731467 | orchestrator | 2025-08-29 20:49:19.731478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731489 | orchestrator | Friday 29 August 2025 20:49:13 +0000 (0:00:00.200) 0:00:14.626 ********* 2025-08-29 20:49:19.731500 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731511 | orchestrator | 2025-08-29 20:49:19.731544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731556 | orchestrator | Friday 29 August 2025 20:49:13 +0000 (0:00:00.175) 0:00:14.801 ********* 2025-08-29 20:49:19.731567 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731577 | orchestrator | 2025-08-29 20:49:19.731588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731599 | orchestrator | Friday 29 August 2025 20:49:13 +0000 (0:00:00.177) 0:00:14.979 ********* 2025-08-29 20:49:19.731610 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731623 | orchestrator | 2025-08-29 20:49:19.731635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731663 | orchestrator | Friday 29 August 2025 20:49:14 +0000 (0:00:00.551) 0:00:15.530 ********* 2025-08-29 20:49:19.731675 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731688 | orchestrator | 2025-08-29 20:49:19.731700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731713 | orchestrator | Friday 29 August 2025 20:49:14 +0000 (0:00:00.202) 0:00:15.732 ********* 2025-08-29 20:49:19.731724 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.731737 | orchestrator | 2025-08-29 20:49:19.731770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731783 | orchestrator | Friday 29 August 2025 20:49:14 +0000 (0:00:00.203) 0:00:15.935 ********* 2025-08-29 20:49:19.731796 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0) 2025-08-29 20:49:19.731810 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0) 2025-08-29 20:49:19.731822 | orchestrator | 2025-08-29 
20:49:19.731834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731846 | orchestrator | Friday 29 August 2025 20:49:15 +0000 (0:00:00.410) 0:00:16.346 ********* 2025-08-29 20:49:19.731859 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2) 2025-08-29 20:49:19.731871 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2) 2025-08-29 20:49:19.731883 | orchestrator | 2025-08-29 20:49:19.731895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731907 | orchestrator | Friday 29 August 2025 20:49:15 +0000 (0:00:00.406) 0:00:16.752 ********* 2025-08-29 20:49:19.731919 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042) 2025-08-29 20:49:19.731932 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042) 2025-08-29 20:49:19.731945 | orchestrator | 2025-08-29 20:49:19.731956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.731969 | orchestrator | Friday 29 August 2025 20:49:15 +0000 (0:00:00.399) 0:00:17.152 ********* 2025-08-29 20:49:19.731998 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df) 2025-08-29 20:49:19.732010 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df) 2025-08-29 20:49:19.732021 | orchestrator | 2025-08-29 20:49:19.732032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:19.732043 | orchestrator | Friday 29 August 2025 20:49:16 +0000 (0:00:00.401) 0:00:17.554 ********* 2025-08-29 20:49:19.732053 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:49:19.732064 | orchestrator | 2025-08-29 20:49:19.732075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732086 | orchestrator | Friday 29 August 2025 20:49:16 +0000 (0:00:00.320) 0:00:17.874 ********* 2025-08-29 20:49:19.732097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 20:49:19.732108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 20:49:19.732128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 20:49:19.732140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 20:49:19.732150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 20:49:19.732161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 20:49:19.732172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 20:49:19.732183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 20:49:19.732194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 20:49:19.732204 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 20:49:19.732215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 20:49:19.732226 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 20:49:19.732237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 20:49:19.732248 | orchestrator | 2025-08-29 20:49:19.732259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732270 | orchestrator | Friday 29 August 2025 20:49:16 +0000 (0:00:00.362) 0:00:18.236 ********* 2025-08-29 20:49:19.732281 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732291 | orchestrator | 2025-08-29 20:49:19.732302 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732313 | orchestrator | Friday 29 August 2025 20:49:17 +0000 (0:00:00.185) 0:00:18.422 ********* 2025-08-29 20:49:19.732324 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732334 | orchestrator | 2025-08-29 20:49:19.732351 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732363 | orchestrator | Friday 29 August 2025 20:49:17 +0000 (0:00:00.589) 0:00:19.011 ********* 2025-08-29 20:49:19.732373 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732384 | orchestrator | 2025-08-29 20:49:19.732395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732406 | orchestrator | Friday 29 August 2025 20:49:17 +0000 (0:00:00.209) 0:00:19.221 ********* 2025-08-29 20:49:19.732417 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732428 | orchestrator | 2025-08-29 20:49:19.732439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732450 | orchestrator | Friday 29 August 2025 20:49:18 +0000 (0:00:00.193) 0:00:19.415 ********* 2025-08-29 20:49:19.732460 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732471 | orchestrator | 2025-08-29 20:49:19.732482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732492 | orchestrator | Friday 29 August 2025 20:49:18 +0000 (0:00:00.189) 0:00:19.605 ********* 2025-08-29 20:49:19.732503 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732514 | orchestrator | 2025-08-29 20:49:19.732525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732535 | orchestrator | Friday 29 August 2025 20:49:18 +0000 (0:00:00.189) 0:00:19.795 ********* 2025-08-29 20:49:19.732546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732557 | orchestrator | 2025-08-29 20:49:19.732568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732578 | orchestrator | Friday 29 August 2025 20:49:18 +0000 (0:00:00.183) 0:00:19.978 ********* 2025-08-29 20:49:19.732589 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732600 | orchestrator | 2025-08-29 20:49:19.732610 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732628 | orchestrator | Friday 29 August 2025 
20:49:18 +0000 (0:00:00.195) 0:00:20.174 ********* 2025-08-29 20:49:19.732639 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 20:49:19.732651 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 20:49:19.732662 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 20:49:19.732673 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 20:49:19.732683 | orchestrator | 2025-08-29 20:49:19.732694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:19.732705 | orchestrator | Friday 29 August 2025 20:49:19 +0000 (0:00:00.601) 0:00:20.775 ********* 2025-08-29 20:49:19.732716 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:19.732726 | orchestrator | 2025-08-29 20:49:19.732759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:25.252980 | orchestrator | Friday 29 August 2025 20:49:19 +0000 (0:00:00.198) 0:00:20.974 ********* 2025-08-29 20:49:25.253075 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253092 | orchestrator | 2025-08-29 20:49:25.253105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:25.253117 | orchestrator | Friday 29 August 2025 20:49:19 +0000 (0:00:00.205) 0:00:21.179 ********* 2025-08-29 20:49:25.253128 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253139 | orchestrator | 2025-08-29 20:49:25.253150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:25.253161 | orchestrator | Friday 29 August 2025 20:49:20 +0000 (0:00:00.193) 0:00:21.372 ********* 2025-08-29 20:49:25.253172 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253182 | orchestrator | 2025-08-29 20:49:25.253193 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 20:49:25.253204 | orchestrator | Friday 29 August 2025 20:49:20 +0000 (0:00:00.187) 0:00:21.559 ********* 2025-08-29 20:49:25.253214 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-08-29 20:49:25.253225 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-08-29 20:49:25.253236 | orchestrator | 2025-08-29 20:49:25.253246 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 20:49:25.253257 | orchestrator | Friday 29 August 2025 20:49:20 +0000 (0:00:00.368) 0:00:21.928 ********* 2025-08-29 20:49:25.253268 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253278 | orchestrator | 2025-08-29 20:49:25.253289 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 20:49:25.253300 | orchestrator | Friday 29 August 2025 20:49:20 +0000 (0:00:00.120) 0:00:22.048 ********* 2025-08-29 20:49:25.253311 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253322 | orchestrator | 2025-08-29 20:49:25.253333 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 20:49:25.253343 | orchestrator | Friday 29 August 2025 20:49:20 +0000 (0:00:00.131) 0:00:22.180 ********* 2025-08-29 20:49:25.253354 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253364 | orchestrator | 2025-08-29 20:49:25.253375 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 
20:49:25.253386 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.130) 0:00:22.310 ********* 2025-08-29 20:49:25.253396 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:49:25.253408 | orchestrator | 2025-08-29 20:49:25.253418 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 20:49:25.253429 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.138) 0:00:22.449 ********* 2025-08-29 20:49:25.253440 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76a76f98-f10a-56c2-85c8-c111ab4c87c6'}}) 2025-08-29 20:49:25.253451 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}}) 2025-08-29 20:49:25.253461 | orchestrator | 2025-08-29 20:49:25.253472 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 20:49:25.253505 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.178) 0:00:22.628 ********* 2025-08-29 20:49:25.253517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76a76f98-f10a-56c2-85c8-c111ab4c87c6'}})  2025-08-29 20:49:25.253531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}})  2025-08-29 20:49:25.253543 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253555 | orchestrator | 2025-08-29 20:49:25.253582 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 20:49:25.253595 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.145) 0:00:22.773 ********* 2025-08-29 20:49:25.253607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76a76f98-f10a-56c2-85c8-c111ab4c87c6'}})  2025-08-29 20:49:25.253620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}})  2025-08-29 20:49:25.253632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253644 | orchestrator | 2025-08-29 20:49:25.253656 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 20:49:25.253668 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.143) 0:00:22.917 ********* 2025-08-29 20:49:25.253680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76a76f98-f10a-56c2-85c8-c111ab4c87c6'}})  2025-08-29 20:49:25.253693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}})  2025-08-29 20:49:25.253705 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253717 | orchestrator | 2025-08-29 20:49:25.253730 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 20:49:25.253742 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.137) 0:00:23.055 ********* 2025-08-29 20:49:25.253808 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:49:25.253822 | orchestrator | 2025-08-29 20:49:25.253834 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 20:49:25.253846 | orchestrator | Friday 29 August 2025 20:49:21 +0000 (0:00:00.151) 0:00:23.207 ********* 2025-08-29 20:49:25.253858 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:49:25.253870 
| orchestrator | 2025-08-29 20:49:25.253883 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 20:49:25.253893 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.192) 0:00:23.399 ********* 2025-08-29 20:49:25.253904 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253915 | orchestrator | 2025-08-29 20:49:25.253943 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 20:49:25.253955 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.124) 0:00:23.524 ********* 2025-08-29 20:49:25.253966 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.253976 | orchestrator | 2025-08-29 20:49:25.253987 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 20:49:25.253998 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.244) 0:00:23.769 ********* 2025-08-29 20:49:25.254009 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.254063 | orchestrator | 2025-08-29 20:49:25.254075 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 20:49:25.254086 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.116) 0:00:23.885 ********* 2025-08-29 20:49:25.254097 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:49:25.254108 | orchestrator |  "ceph_osd_devices": { 2025-08-29 20:49:25.254119 | orchestrator |  "sdb": { 2025-08-29 20:49:25.254131 | orchestrator |  "osd_lvm_uuid": "76a76f98-f10a-56c2-85c8-c111ab4c87c6" 2025-08-29 20:49:25.254142 | orchestrator |  }, 2025-08-29 20:49:25.254153 | orchestrator |  "sdc": { 2025-08-29 20:49:25.254165 | orchestrator |  "osd_lvm_uuid": "f3fee7d3-6bcf-515f-a6c3-caef0862fd99" 2025-08-29 20:49:25.254188 | orchestrator |  } 2025-08-29 20:49:25.254199 | orchestrator |  } 2025-08-29 20:49:25.254210 | orchestrator | } 2025-08-29 20:49:25.254221 | orchestrator | 2025-08-29 20:49:25.254232 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 20:49:25.254243 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.129) 0:00:24.015 ********* 2025-08-29 20:49:25.254254 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.254265 | orchestrator | 2025-08-29 20:49:25.254276 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 20:49:25.254287 | orchestrator | Friday 29 August 2025 20:49:22 +0000 (0:00:00.190) 0:00:24.205 ********* 2025-08-29 20:49:25.254297 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.254308 | orchestrator | 2025-08-29 20:49:25.254319 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 20:49:25.254330 | orchestrator | Friday 29 August 2025 20:49:23 +0000 (0:00:00.100) 0:00:24.306 ********* 2025-08-29 20:49:25.254341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:49:25.254352 | orchestrator | 2025-08-29 20:49:25.254362 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 20:49:25.254373 | orchestrator | Friday 29 August 2025 20:49:23 +0000 (0:00:00.114) 0:00:24.420 ********* 2025-08-29 20:49:25.254384 | orchestrator | changed: [testbed-node-4] => { 2025-08-29 20:49:25.254395 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 20:49:25.254406 | orchestrator |  "ceph_osd_devices": { 2025-08-29 
20:49:25.254417 | orchestrator |  "sdb": { 2025-08-29 20:49:25.254428 | orchestrator |  "osd_lvm_uuid": "76a76f98-f10a-56c2-85c8-c111ab4c87c6" 2025-08-29 20:49:25.254439 | orchestrator |  }, 2025-08-29 20:49:25.254450 | orchestrator |  "sdc": { 2025-08-29 20:49:25.254461 | orchestrator |  "osd_lvm_uuid": "f3fee7d3-6bcf-515f-a6c3-caef0862fd99" 2025-08-29 20:49:25.254472 | orchestrator |  } 2025-08-29 20:49:25.254483 | orchestrator |  }, 2025-08-29 20:49:25.254494 | orchestrator |  "lvm_volumes": [ 2025-08-29 20:49:25.254505 | orchestrator |  { 2025-08-29 20:49:25.254516 | orchestrator |  "data": "osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6", 2025-08-29 20:49:25.254527 | orchestrator |  "data_vg": "ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6" 2025-08-29 20:49:25.254538 | orchestrator |  }, 2025-08-29 20:49:25.254549 | orchestrator |  { 2025-08-29 20:49:25.254559 | orchestrator |  "data": "osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99", 2025-08-29 20:49:25.254570 | orchestrator |  "data_vg": "ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99" 2025-08-29 20:49:25.254581 | orchestrator |  } 2025-08-29 20:49:25.254592 | orchestrator |  ] 2025-08-29 20:49:25.254603 | orchestrator |  } 2025-08-29 20:49:25.254614 | orchestrator | } 2025-08-29 20:49:25.254625 | orchestrator | 2025-08-29 20:49:25.254636 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 20:49:25.254647 | orchestrator | Friday 29 August 2025 20:49:23 +0000 (0:00:00.190) 0:00:24.611 ********* 2025-08-29 20:49:25.254658 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:25.254669 | orchestrator | 2025-08-29 20:49:25.254680 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 20:49:25.254691 | orchestrator | 2025-08-29 20:49:25.254701 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:49:25.254712 | orchestrator | Friday 29 August 2025 20:49:24 +0000 (0:00:00.757) 0:00:25.368 ********* 2025-08-29 20:49:25.254723 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:25.254734 | orchestrator | 2025-08-29 20:49:25.254744 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:49:25.254785 | orchestrator | Friday 29 August 2025 20:49:24 +0000 (0:00:00.368) 0:00:25.737 ********* 2025-08-29 20:49:25.254796 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:49:25.254814 | orchestrator | 2025-08-29 20:49:25.254832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:25.254843 | orchestrator | Friday 29 August 2025 20:49:24 +0000 (0:00:00.472) 0:00:26.209 ********* 2025-08-29 20:49:25.254854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 20:49:25.254865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 20:49:25.254875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 20:49:25.254886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 20:49:25.254897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 20:49:25.254907 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-08-29 20:49:25.254926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 20:49:31.787372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 20:49:31.787468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 20:49:31.787483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 20:49:31.787495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 20:49:31.787506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 20:49:31.787518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 20:49:31.787529 | orchestrator | 2025-08-29 20:49:31.787541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787553 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.285) 0:00:26.494 ********* 2025-08-29 20:49:31.787564 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787575 | orchestrator | 2025-08-29 20:49:31.787586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787597 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.149) 0:00:26.644 ********* 2025-08-29 20:49:31.787608 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787618 | orchestrator | 2025-08-29 20:49:31.787629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787640 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.147) 0:00:26.791 ********* 2025-08-29 20:49:31.787650 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787661 | orchestrator | 2025-08-29 20:49:31.787672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787682 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.149) 0:00:26.940 ********* 2025-08-29 20:49:31.787693 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787704 | orchestrator | 2025-08-29 20:49:31.787715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787725 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.144) 0:00:27.084 ********* 2025-08-29 20:49:31.787736 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787746 | orchestrator | 2025-08-29 20:49:31.787800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787812 | orchestrator | Friday 29 August 2025 20:49:25 +0000 (0:00:00.150) 0:00:27.234 ********* 2025-08-29 20:49:31.787823 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787834 | orchestrator | 2025-08-29 20:49:31.787845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787856 | orchestrator | Friday 29 August 2025 20:49:26 +0000 (0:00:00.147) 0:00:27.382 ********* 2025-08-29 20:49:31.787866 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787877 | orchestrator | 2025-08-29 20:49:31.787909 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-08-29 20:49:31.787921 | orchestrator | Friday 29 August 2025 20:49:26 +0000 (0:00:00.142) 0:00:27.524 ********* 2025-08-29 20:49:31.787933 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.787946 | orchestrator | 2025-08-29 20:49:31.787959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.787971 | orchestrator | Friday 29 August 2025 20:49:26 +0000 (0:00:00.130) 0:00:27.655 ********* 2025-08-29 20:49:31.787984 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38) 2025-08-29 20:49:31.788007 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38) 2025-08-29 20:49:31.788020 | orchestrator | 2025-08-29 20:49:31.788033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.788045 | orchestrator | Friday 29 August 2025 20:49:26 +0000 (0:00:00.450) 0:00:28.106 ********* 2025-08-29 20:49:31.788057 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88) 2025-08-29 20:49:31.788069 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88) 2025-08-29 20:49:31.788081 | orchestrator | 2025-08-29 20:49:31.788094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.788106 | orchestrator | Friday 29 August 2025 20:49:27 +0000 (0:00:00.615) 0:00:28.721 ********* 2025-08-29 20:49:31.788118 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59) 2025-08-29 20:49:31.788130 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59) 2025-08-29 20:49:31.788143 | orchestrator | 2025-08-29 20:49:31.788155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.788167 | orchestrator | Friday 29 August 2025 20:49:27 +0000 (0:00:00.363) 0:00:29.085 ********* 2025-08-29 20:49:31.788179 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe) 2025-08-29 20:49:31.788192 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe) 2025-08-29 20:49:31.788205 | orchestrator | 2025-08-29 20:49:31.788216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:49:31.788228 | orchestrator | Friday 29 August 2025 20:49:28 +0000 (0:00:00.350) 0:00:29.436 ********* 2025-08-29 20:49:31.788240 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:49:31.788252 | orchestrator | 2025-08-29 20:49:31.788264 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788277 | orchestrator | Friday 29 August 2025 20:49:28 +0000 (0:00:00.291) 0:00:29.728 ********* 2025-08-29 20:49:31.788304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 20:49:31.788316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 20:49:31.788327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 20:49:31.788338 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 20:49:31.788348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 20:49:31.788359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 20:49:31.788385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 20:49:31.788397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 20:49:31.788407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 20:49:31.788429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 20:49:31.788441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 20:49:31.788451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 20:49:31.788462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 20:49:31.788473 | orchestrator | 2025-08-29 20:49:31.788483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788494 | orchestrator | Friday 29 August 2025 20:49:28 +0000 (0:00:00.333) 0:00:30.061 ********* 2025-08-29 20:49:31.788505 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788515 | orchestrator | 2025-08-29 20:49:31.788526 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788537 | orchestrator | Friday 29 August 2025 20:49:28 +0000 (0:00:00.171) 0:00:30.233 ********* 2025-08-29 20:49:31.788548 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788559 | orchestrator | 2025-08-29 20:49:31.788570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788580 | orchestrator | Friday 29 August 2025 20:49:29 +0000 (0:00:00.162) 0:00:30.395 ********* 2025-08-29 20:49:31.788591 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788602 | orchestrator | 2025-08-29 20:49:31.788613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788628 | orchestrator | Friday 29 August 2025 20:49:29 +0000 (0:00:00.179) 0:00:30.575 ********* 2025-08-29 20:49:31.788639 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788650 | orchestrator | 2025-08-29 20:49:31.788661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788672 | orchestrator | Friday 29 August 2025 20:49:29 +0000 (0:00:00.176) 0:00:30.751 ********* 2025-08-29 20:49:31.788682 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788693 | orchestrator | 2025-08-29 20:49:31.788704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788715 | orchestrator | Friday 29 August 2025 20:49:29 +0000 (0:00:00.184) 0:00:30.936 ********* 2025-08-29 20:49:31.788726 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788736 | orchestrator | 2025-08-29 20:49:31.788747 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-08-29 20:49:31.788778 | orchestrator | Friday 29 August 2025 20:49:30 +0000 (0:00:00.510) 0:00:31.447 ********* 2025-08-29 20:49:31.788789 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788800 | orchestrator | 2025-08-29 20:49:31.788810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788821 | orchestrator | Friday 29 August 2025 20:49:30 +0000 (0:00:00.176) 0:00:31.623 ********* 2025-08-29 20:49:31.788832 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788842 | orchestrator | 2025-08-29 20:49:31.788853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788864 | orchestrator | Friday 29 August 2025 20:49:30 +0000 (0:00:00.173) 0:00:31.797 ********* 2025-08-29 20:49:31.788874 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 20:49:31.788885 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 20:49:31.788897 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 20:49:31.788907 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 20:49:31.788918 | orchestrator | 2025-08-29 20:49:31.788929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788939 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.493) 0:00:32.291 ********* 2025-08-29 20:49:31.788950 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.788961 | orchestrator | 2025-08-29 20:49:31.788971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.788982 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.165) 0:00:32.457 ********* 2025-08-29 20:49:31.789000 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.789011 | orchestrator | 2025-08-29 20:49:31.789022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.789032 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.173) 0:00:32.631 ********* 2025-08-29 20:49:31.789043 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.789054 | orchestrator | 2025-08-29 20:49:31.789064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:49:31.789075 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.174) 0:00:32.805 ********* 2025-08-29 20:49:31.789086 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:31.789096 | orchestrator | 2025-08-29 20:49:31.789107 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 20:49:31.789124 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.223) 0:00:33.028 ********* 2025-08-29 20:49:35.347151 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-08-29 20:49:35.347242 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-08-29 20:49:35.347257 | orchestrator | 2025-08-29 20:49:35.347269 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 20:49:35.347280 | orchestrator | Friday 29 August 2025 20:49:31 +0000 (0:00:00.149) 0:00:33.178 ********* 2025-08-29 20:49:35.347292 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347303 | orchestrator | 2025-08-29 20:49:35.347314 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-08-29 20:49:35.347325 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.106) 0:00:33.284 ********* 2025-08-29 20:49:35.347336 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347347 | orchestrator | 2025-08-29 20:49:35.347358 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 20:49:35.347369 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.101) 0:00:33.385 ********* 2025-08-29 20:49:35.347379 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347390 | orchestrator | 2025-08-29 20:49:35.347401 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 20:49:35.347412 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.117) 0:00:33.503 ********* 2025-08-29 20:49:35.347423 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:49:35.347434 | orchestrator | 2025-08-29 20:49:35.347445 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 20:49:35.347455 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.241) 0:00:33.744 ********* 2025-08-29 20:49:35.347466 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '275f26f1-4e1c-5372-9190-a1521a972d04'}}) 2025-08-29 20:49:35.347479 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5db720f-fb16-50b5-adff-95cbe6288183'}}) 2025-08-29 20:49:35.347490 | orchestrator | 2025-08-29 20:49:35.347501 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 20:49:35.347512 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.146) 0:00:33.891 ********* 2025-08-29 20:49:35.347523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '275f26f1-4e1c-5372-9190-a1521a972d04'}})  2025-08-29 20:49:35.347535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5db720f-fb16-50b5-adff-95cbe6288183'}})  2025-08-29 20:49:35.347546 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347557 | orchestrator | 2025-08-29 20:49:35.347567 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 20:49:35.347582 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.129) 0:00:34.020 ********* 2025-08-29 20:49:35.347602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '275f26f1-4e1c-5372-9190-a1521a972d04'}})  2025-08-29 20:49:35.347622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5db720f-fb16-50b5-adff-95cbe6288183'}})  2025-08-29 20:49:35.347666 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347689 | orchestrator | 2025-08-29 20:49:35.347716 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 20:49:35.347735 | orchestrator | Friday 29 August 2025 20:49:32 +0000 (0:00:00.150) 0:00:34.171 ********* 2025-08-29 20:49:35.347793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '275f26f1-4e1c-5372-9190-a1521a972d04'}})  2025-08-29 20:49:35.347833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5db720f-fb16-50b5-adff-95cbe6288183'}})  2025-08-29 
20:49:35.347847 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.347860 | orchestrator | 2025-08-29 20:49:35.347872 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 20:49:35.347884 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.129) 0:00:34.300 ********* 2025-08-29 20:49:35.347896 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:49:35.347908 | orchestrator | 2025-08-29 20:49:35.347920 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 20:49:35.347931 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.137) 0:00:34.437 ********* 2025-08-29 20:49:35.347943 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:49:35.347955 | orchestrator | 2025-08-29 20:49:35.347967 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 20:49:35.347979 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.116) 0:00:34.553 ********* 2025-08-29 20:49:35.347991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348003 | orchestrator | 2025-08-29 20:49:35.348016 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 20:49:35.348028 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.105) 0:00:34.658 ********* 2025-08-29 20:49:35.348040 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348051 | orchestrator | 2025-08-29 20:49:35.348061 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 20:49:35.348072 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.124) 0:00:34.783 ********* 2025-08-29 20:49:35.348082 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348093 | orchestrator | 2025-08-29 20:49:35.348103 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 20:49:35.348114 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.126) 0:00:34.909 ********* 2025-08-29 20:49:35.348125 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:49:35.348135 | orchestrator |  "ceph_osd_devices": { 2025-08-29 20:49:35.348146 | orchestrator |  "sdb": { 2025-08-29 20:49:35.348158 | orchestrator |  "osd_lvm_uuid": "275f26f1-4e1c-5372-9190-a1521a972d04" 2025-08-29 20:49:35.348187 | orchestrator |  }, 2025-08-29 20:49:35.348198 | orchestrator |  "sdc": { 2025-08-29 20:49:35.348210 | orchestrator |  "osd_lvm_uuid": "c5db720f-fb16-50b5-adff-95cbe6288183" 2025-08-29 20:49:35.348221 | orchestrator |  } 2025-08-29 20:49:35.348232 | orchestrator |  } 2025-08-29 20:49:35.348243 | orchestrator | } 2025-08-29 20:49:35.348254 | orchestrator | 2025-08-29 20:49:35.348265 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 20:49:35.348276 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.108) 0:00:35.018 ********* 2025-08-29 20:49:35.348287 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348297 | orchestrator | 2025-08-29 20:49:35.348308 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 20:49:35.348319 | orchestrator | Friday 29 August 2025 20:49:33 +0000 (0:00:00.113) 0:00:35.131 ********* 2025-08-29 20:49:35.348329 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348340 | orchestrator | 2025-08-29 20:49:35.348351 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 20:49:35.348374 | orchestrator | Friday 29 August 2025 20:49:34 +0000 (0:00:00.244) 0:00:35.376 ********* 2025-08-29 20:49:35.348385 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:49:35.348395 | orchestrator | 2025-08-29 20:49:35.348406 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 20:49:35.348417 | orchestrator | Friday 29 August 2025 20:49:34 +0000 (0:00:00.144) 0:00:35.520 ********* 2025-08-29 20:49:35.348428 | orchestrator | changed: [testbed-node-5] => { 2025-08-29 20:49:35.348438 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 20:49:35.348449 | orchestrator |  "ceph_osd_devices": { 2025-08-29 20:49:35.348460 | orchestrator |  "sdb": { 2025-08-29 20:49:35.348471 | orchestrator |  "osd_lvm_uuid": "275f26f1-4e1c-5372-9190-a1521a972d04" 2025-08-29 20:49:35.348482 | orchestrator |  }, 2025-08-29 20:49:35.348493 | orchestrator |  "sdc": { 2025-08-29 20:49:35.348504 | orchestrator |  "osd_lvm_uuid": "c5db720f-fb16-50b5-adff-95cbe6288183" 2025-08-29 20:49:35.348514 | orchestrator |  } 2025-08-29 20:49:35.348525 | orchestrator |  }, 2025-08-29 20:49:35.348536 | orchestrator |  "lvm_volumes": [ 2025-08-29 20:49:35.348547 | orchestrator |  { 2025-08-29 20:49:35.348558 | orchestrator |  "data": "osd-block-275f26f1-4e1c-5372-9190-a1521a972d04", 2025-08-29 20:49:35.348569 | orchestrator |  "data_vg": "ceph-275f26f1-4e1c-5372-9190-a1521a972d04" 2025-08-29 20:49:35.348580 | orchestrator |  }, 2025-08-29 20:49:35.348590 | orchestrator |  { 2025-08-29 20:49:35.348601 | orchestrator |  "data": "osd-block-c5db720f-fb16-50b5-adff-95cbe6288183", 2025-08-29 20:49:35.348612 | orchestrator |  "data_vg": "ceph-c5db720f-fb16-50b5-adff-95cbe6288183" 2025-08-29 20:49:35.348623 | orchestrator |  } 2025-08-29 20:49:35.348634 | orchestrator |  ] 2025-08-29 20:49:35.348644 | orchestrator |  } 2025-08-29 20:49:35.348655 | orchestrator | } 2025-08-29 20:49:35.348670 | orchestrator | 2025-08-29 20:49:35.348681 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 20:49:35.348692 | orchestrator | Friday 29 August 2025 20:49:34 +0000 (0:00:00.185) 0:00:35.706 ********* 2025-08-29 20:49:35.348702 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 20:49:35.348713 | orchestrator | 2025-08-29 20:49:35.348724 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:49:35.348735 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 20:49:35.348746 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 20:49:35.348786 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 20:49:35.348806 | orchestrator | 2025-08-29 20:49:35.348823 | orchestrator | 2025-08-29 20:49:35.348840 | orchestrator | 2025-08-29 20:49:35.348858 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:49:35.348877 | orchestrator | Friday 29 August 2025 20:49:35 +0000 (0:00:00.870) 0:00:36.577 ********* 2025-08-29 20:49:35.348895 | orchestrator | =============================================================================== 2025-08-29 20:49:35.348914 | orchestrator | Write configuration file 
------------------------------------------------ 3.70s 2025-08-29 20:49:35.348929 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s 2025-08-29 20:49:35.348939 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2025-08-29 20:49:35.348950 | orchestrator | Get initial list of available block devices ----------------------------- 0.91s 2025-08-29 20:49:35.348961 | orchestrator | Add known partitions to the list of available block devices ------------- 0.90s 2025-08-29 20:49:35.348971 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s 2025-08-29 20:49:35.348991 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.69s 2025-08-29 20:49:35.349002 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-08-29 20:49:35.349013 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-08-29 20:49:35.349024 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-08-29 20:49:35.349034 | orchestrator | Print configuration data ------------------------------------------------ 0.58s 2025-08-29 20:49:35.349045 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.56s 2025-08-29 20:49:35.349056 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2025-08-29 20:49:35.349066 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2025-08-29 20:49:35.349086 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.51s 2025-08-29 20:49:35.574852 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s 2025-08-29 20:49:35.574931 | orchestrator | Add known partitions to the list of available block devices ------------- 0.49s 2025-08-29 20:49:35.574943 | orchestrator | Add known links to the list of available block devices ------------------ 0.49s 2025-08-29 20:49:35.574953 | orchestrator | Print DB devices -------------------------------------------------------- 0.49s 2025-08-29 20:49:35.574963 | orchestrator | Set WAL devices config data --------------------------------------------- 0.48s 2025-08-29 20:49:57.621288 | orchestrator | 2025-08-29 20:49:57 | INFO  | Task dd62e4a5-e8b0-4544-a90c-a2af271bc8f3 (sync inventory) is running in background. Output coming soon. 
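The configuration data printed above for testbed-node-5 is what the "Write configuration file" handler persists on testbed-manager: one VG/LV pair per OSD disk. A minimal sketch of such a generated file, assuming a host_vars YAML layout (path and file name are hypothetical; the UUIDs mirror the printed output):

# Hypothetical host_vars file for testbed-node-5 (path and file name are assumptions)
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 275f26f1-4e1c-5372-9190-a1521a972d04
  sdc:
    osd_lvm_uuid: c5db720f-fb16-50b5-adff-95cbe6288183
lvm_volumes:
  - data: osd-block-275f26f1-4e1c-5372-9190-a1521a972d04
    data_vg: ceph-275f26f1-4e1c-5372-9190-a1521a972d04
  - data: osd-block-c5db720f-fb16-50b5-adff-95cbe6288183
    data_vg: ceph-c5db720f-fb16-50b5-adff-95cbe6288183

The entries carry only data/data_vg because the shared DB/WAL tasks were skipped; with dedicated DB or WAL devices each entry would also carry db/db_vg and wal/wal_vg keys in the ceph-ansible lvm_volumes format.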
2025-08-29 20:50:14.486158 | orchestrator | 2025-08-29 20:49:58 | INFO  | Starting group_vars file reorganization 2025-08-29 20:50:14.486444 | orchestrator | 2025-08-29 20:49:58 | INFO  | Moved 0 file(s) to their respective directories 2025-08-29 20:50:14.486466 | orchestrator | 2025-08-29 20:49:58 | INFO  | Group_vars file reorganization completed 2025-08-29 20:50:14.486478 | orchestrator | 2025-08-29 20:50:00 | INFO  | Starting variable preparation from inventory 2025-08-29 20:50:14.486490 | orchestrator | 2025-08-29 20:50:01 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-08-29 20:50:14.486502 | orchestrator | 2025-08-29 20:50:01 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-08-29 20:50:14.486513 | orchestrator | 2025-08-29 20:50:01 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-08-29 20:50:14.486547 | orchestrator | 2025-08-29 20:50:01 | INFO  | 3 file(s) written, 6 host(s) processed 2025-08-29 20:50:14.486564 | orchestrator | 2025-08-29 20:50:01 | INFO  | Variable preparation completed 2025-08-29 20:50:14.486575 | orchestrator | 2025-08-29 20:50:02 | INFO  | Starting inventory overwrite handling 2025-08-29 20:50:14.486586 | orchestrator | 2025-08-29 20:50:02 | INFO  | Handling group overwrites in 99-overwrite 2025-08-29 20:50:14.486597 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group frr:children from 60-generic 2025-08-29 20:50:14.486613 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group storage:children from 50-kolla 2025-08-29 20:50:14.486627 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group netbird:children from 50-infrastruture 2025-08-29 20:50:14.486639 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group ceph-rgw from 50-ceph 2025-08-29 20:50:14.486652 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group ceph-mds from 50-ceph 2025-08-29 20:50:14.486665 | orchestrator | 2025-08-29 20:50:02 | INFO  | Handling group overwrites in 20-roles 2025-08-29 20:50:14.486677 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removing group k3s_node from 50-infrastruture 2025-08-29 20:50:14.486711 | orchestrator | 2025-08-29 20:50:02 | INFO  | Removed 6 group(s) in total 2025-08-29 20:50:14.486724 | orchestrator | 2025-08-29 20:50:02 | INFO  | Inventory overwrite handling completed 2025-08-29 20:50:14.486737 | orchestrator | 2025-08-29 20:50:03 | INFO  | Starting merge of inventory files 2025-08-29 20:50:14.486749 | orchestrator | 2025-08-29 20:50:03 | INFO  | Inventory files merged successfully 2025-08-29 20:50:14.486761 | orchestrator | 2025-08-29 20:50:07 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-08-29 20:50:14.486773 | orchestrator | 2025-08-29 20:50:13 | INFO  | Successfully wrote ClusterShell configuration 2025-08-29 20:50:14.486806 | orchestrator | [master 93b9440] 2025-08-29-20-50 2025-08-29 20:50:14.486819 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-08-29 20:50:16.072606 | orchestrator | 2025-08-29 20:50:16 | INFO  | Task bc1474dc-4ae9-4293-9b69-225b392d3447 (ceph-create-lvm-devices) was prepared for execution. 2025-08-29 20:50:16.072697 | orchestrator | 2025-08-29 20:50:16 | INFO  | It takes a moment until task bc1474dc-4ae9-4293-9b69-225b392d3447 (ceph-create-lvm-devices) has been started and output is visible here. 
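The inventory sync above writes three small generated group_vars files (050-*.yml) and regenerates the ClusterShell configuration from the merged Ansible inventory. A hedged sketch of what the generated files might contain; only the file and variable names come from the log, every value below is a placeholder:

# 050-ceph-cluster-fsid.yml (placeholder value)
ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000

# 050-infrastructure-cephclient-mons.yml (placeholder values)
cephclient_mons:
  - 192.168.16.10
  - 192.168.16.11
  - 192.168.16.12

# 050-kolla-ceph-rgw-hosts.yml (placeholder values; the exact entry format is an assumption)
ceph_rgw_hosts:
  - host: testbed-node-0
    ip: 192.168.16.10
    port: 8081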
2025-08-29 20:50:26.480086 | orchestrator | 2025-08-29 20:50:26.480238 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 20:50:26.480263 | orchestrator | 2025-08-29 20:50:26.480282 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:50:26.480302 | orchestrator | Friday 29 August 2025 20:50:19 +0000 (0:00:00.236) 0:00:00.236 ********* 2025-08-29 20:50:26.480323 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 20:50:26.480342 | orchestrator | 2025-08-29 20:50:26.480361 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:50:26.480380 | orchestrator | Friday 29 August 2025 20:50:19 +0000 (0:00:00.281) 0:00:00.517 ********* 2025-08-29 20:50:26.480397 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:26.480410 | orchestrator | 2025-08-29 20:50:26.480421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480432 | orchestrator | Friday 29 August 2025 20:50:20 +0000 (0:00:00.207) 0:00:00.725 ********* 2025-08-29 20:50:26.480443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 20:50:26.480455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 20:50:26.480467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 20:50:26.480478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 20:50:26.480489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 20:50:26.480500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 20:50:26.480510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 20:50:26.480521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 20:50:26.480532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 20:50:26.480542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 20:50:26.480553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 20:50:26.480564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 20:50:26.480574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 20:50:26.480588 | orchestrator | 2025-08-29 20:50:26.480600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480638 | orchestrator | Friday 29 August 2025 20:50:20 +0000 (0:00:00.357) 0:00:01.082 ********* 2025-08-29 20:50:26.480652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480665 | orchestrator | 2025-08-29 20:50:26.480677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480689 | orchestrator | Friday 29 August 2025 20:50:20 +0000 (0:00:00.322) 0:00:01.405 ********* 2025-08-29 20:50:26.480701 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
20:50:26.480714 | orchestrator | 2025-08-29 20:50:26.480726 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480738 | orchestrator | Friday 29 August 2025 20:50:20 +0000 (0:00:00.165) 0:00:01.570 ********* 2025-08-29 20:50:26.480751 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480763 | orchestrator | 2025-08-29 20:50:26.480775 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480817 | orchestrator | Friday 29 August 2025 20:50:21 +0000 (0:00:00.164) 0:00:01.734 ********* 2025-08-29 20:50:26.480830 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480842 | orchestrator | 2025-08-29 20:50:26.480854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480867 | orchestrator | Friday 29 August 2025 20:50:21 +0000 (0:00:00.170) 0:00:01.905 ********* 2025-08-29 20:50:26.480879 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480891 | orchestrator | 2025-08-29 20:50:26.480903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480915 | orchestrator | Friday 29 August 2025 20:50:21 +0000 (0:00:00.165) 0:00:02.070 ********* 2025-08-29 20:50:26.480928 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480940 | orchestrator | 2025-08-29 20:50:26.480951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.480962 | orchestrator | Friday 29 August 2025 20:50:21 +0000 (0:00:00.190) 0:00:02.260 ********* 2025-08-29 20:50:26.480972 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.480983 | orchestrator | 2025-08-29 20:50:26.480993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481004 | orchestrator | Friday 29 August 2025 20:50:21 +0000 (0:00:00.192) 0:00:02.452 ********* 2025-08-29 20:50:26.481015 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481025 | orchestrator | 2025-08-29 20:50:26.481036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481047 | orchestrator | Friday 29 August 2025 20:50:22 +0000 (0:00:00.172) 0:00:02.625 ********* 2025-08-29 20:50:26.481058 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745) 2025-08-29 20:50:26.481070 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745) 2025-08-29 20:50:26.481081 | orchestrator | 2025-08-29 20:50:26.481092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481103 | orchestrator | Friday 29 August 2025 20:50:22 +0000 (0:00:00.326) 0:00:02.952 ********* 2025-08-29 20:50:26.481136 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4) 2025-08-29 20:50:26.481149 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4) 2025-08-29 20:50:26.481159 | orchestrator | 2025-08-29 20:50:26.481170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481181 | orchestrator | Friday 29 August 2025 20:50:22 +0000 (0:00:00.378) 0:00:03.331 ********* 2025-08-29 
20:50:26.481192 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf) 2025-08-29 20:50:26.481203 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf) 2025-08-29 20:50:26.481214 | orchestrator | 2025-08-29 20:50:26.481225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481245 | orchestrator | Friday 29 August 2025 20:50:23 +0000 (0:00:00.476) 0:00:03.807 ********* 2025-08-29 20:50:26.481256 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346) 2025-08-29 20:50:26.481267 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346) 2025-08-29 20:50:26.481278 | orchestrator | 2025-08-29 20:50:26.481289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:26.481300 | orchestrator | Friday 29 August 2025 20:50:23 +0000 (0:00:00.608) 0:00:04.416 ********* 2025-08-29 20:50:26.481310 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:50:26.481321 | orchestrator | 2025-08-29 20:50:26.481332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481343 | orchestrator | Friday 29 August 2025 20:50:24 +0000 (0:00:00.625) 0:00:05.042 ********* 2025-08-29 20:50:26.481354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 20:50:26.481364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 20:50:26.481375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 20:50:26.481386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 20:50:26.481417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 20:50:26.481428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 20:50:26.481439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 20:50:26.481450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 20:50:26.481460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 20:50:26.481471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 20:50:26.481482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 20:50:26.481492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 20:50:26.481508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 20:50:26.481519 | orchestrator | 2025-08-29 20:50:26.481530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481541 | orchestrator | Friday 29 August 2025 20:50:24 +0000 (0:00:00.467) 0:00:05.509 ********* 2025-08-29 20:50:26.481552 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 20:50:26.481563 | orchestrator | 2025-08-29 20:50:26.481574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481585 | orchestrator | Friday 29 August 2025 20:50:25 +0000 (0:00:00.187) 0:00:05.697 ********* 2025-08-29 20:50:26.481596 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481607 | orchestrator | 2025-08-29 20:50:26.481617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481628 | orchestrator | Friday 29 August 2025 20:50:25 +0000 (0:00:00.199) 0:00:05.897 ********* 2025-08-29 20:50:26.481639 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481650 | orchestrator | 2025-08-29 20:50:26.481661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481671 | orchestrator | Friday 29 August 2025 20:50:25 +0000 (0:00:00.189) 0:00:06.086 ********* 2025-08-29 20:50:26.481682 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481693 | orchestrator | 2025-08-29 20:50:26.481704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481715 | orchestrator | Friday 29 August 2025 20:50:25 +0000 (0:00:00.199) 0:00:06.285 ********* 2025-08-29 20:50:26.481732 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481743 | orchestrator | 2025-08-29 20:50:26.481754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481765 | orchestrator | Friday 29 August 2025 20:50:25 +0000 (0:00:00.197) 0:00:06.483 ********* 2025-08-29 20:50:26.481775 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481786 | orchestrator | 2025-08-29 20:50:26.481818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481829 | orchestrator | Friday 29 August 2025 20:50:26 +0000 (0:00:00.187) 0:00:06.671 ********* 2025-08-29 20:50:26.481840 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:26.481851 | orchestrator | 2025-08-29 20:50:26.481861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:26.481872 | orchestrator | Friday 29 August 2025 20:50:26 +0000 (0:00:00.194) 0:00:06.865 ********* 2025-08-29 20:50:26.481891 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936215 | orchestrator | 2025-08-29 20:50:33.936328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:33.936344 | orchestrator | Friday 29 August 2025 20:50:26 +0000 (0:00:00.192) 0:00:07.057 ********* 2025-08-29 20:50:33.936357 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 20:50:33.936369 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 20:50:33.936380 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 20:50:33.936391 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 20:50:33.936402 | orchestrator | 2025-08-29 20:50:33.936413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:33.936424 | orchestrator | Friday 29 August 2025 20:50:27 +0000 (0:00:00.989) 0:00:08.047 ********* 2025-08-29 20:50:33.936435 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936446 | orchestrator | 2025-08-29 20:50:33.936457 | orchestrator | 
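The repeated "Add known links" / "Add known partitions" tasks include /ansible/tasks/_add-device-links.yml and _add-device-partitions.yml once per device and extend the list of usable block device names with /dev/disk/by-id aliases (scsi-0QEMU_..., ata-QEMU_DVD-ROM_...) and partitions (sda1, sda14, ...). A minimal, hypothetical stand-in for those included task files, assuming a per-device loop variable named device and a result list named ceph_available_devices (neither name is taken from the real playbook):

# Hypothetical stand-in for _add-device-links.yml / _add-device-partitions.yml
- name: Find all by-id symlinks
  ansible.builtin.find:
    paths: /dev/disk/by-id
    file_type: link
  register: _by_id_links

- name: Resolve the target of every symlink
  ansible.builtin.stat:
    path: "{{ item.path }}"
  loop: "{{ _by_id_links.files }}"
  register: _by_id_stat

- name: Add aliases that resolve to the current device
  ansible.builtin.set_fact:
    ceph_available_devices: "{{ ceph_available_devices | default([]) + [item.item.path | basename] }}"
  loop: "{{ _by_id_stat.results }}"
  when: (item.stat.lnk_source | default('') | basename) == device

- name: Add known partitions of the current device
  ansible.builtin.set_fact:
    ceph_available_devices: "{{ ceph_available_devices | default([]) + (ansible_facts.devices[device].partitions | default({}) | list) }}"
  when: device in ansible_facts.devices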
TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:33.936468 | orchestrator | Friday 29 August 2025 20:50:27 +0000 (0:00:00.197) 0:00:08.244 ********* 2025-08-29 20:50:33.936479 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936489 | orchestrator | 2025-08-29 20:50:33.936500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:33.936511 | orchestrator | Friday 29 August 2025 20:50:27 +0000 (0:00:00.192) 0:00:08.437 ********* 2025-08-29 20:50:33.936522 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936533 | orchestrator | 2025-08-29 20:50:33.936543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:33.936555 | orchestrator | Friday 29 August 2025 20:50:28 +0000 (0:00:00.187) 0:00:08.625 ********* 2025-08-29 20:50:33.936566 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936577 | orchestrator | 2025-08-29 20:50:33.936588 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 20:50:33.936599 | orchestrator | Friday 29 August 2025 20:50:28 +0000 (0:00:00.199) 0:00:08.825 ********* 2025-08-29 20:50:33.936610 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936620 | orchestrator | 2025-08-29 20:50:33.936631 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 20:50:33.936642 | orchestrator | Friday 29 August 2025 20:50:28 +0000 (0:00:00.137) 0:00:08.962 ********* 2025-08-29 20:50:33.936653 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}}) 2025-08-29 20:50:33.936665 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79476f9b-63cb-5c74-926b-50a3eb682c43'}}) 2025-08-29 20:50:33.936675 | orchestrator | 2025-08-29 20:50:33.936687 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 20:50:33.936697 | orchestrator | Friday 29 August 2025 20:50:28 +0000 (0:00:00.184) 0:00:09.147 ********* 2025-08-29 20:50:33.936709 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}) 2025-08-29 20:50:33.936741 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'}) 2025-08-29 20:50:33.936752 | orchestrator | 2025-08-29 20:50:33.936763 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 20:50:33.936775 | orchestrator | Friday 29 August 2025 20:50:30 +0000 (0:00:02.049) 0:00:11.196 ********* 2025-08-29 20:50:33.936788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.936836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.936848 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936861 | orchestrator | 2025-08-29 20:50:33.936873 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 
20:50:33.936886 | orchestrator | Friday 29 August 2025 20:50:30 +0000 (0:00:00.117) 0:00:11.314 ********* 2025-08-29 20:50:33.936898 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}) 2025-08-29 20:50:33.936911 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'}) 2025-08-29 20:50:33.936923 | orchestrator | 2025-08-29 20:50:33.936934 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 20:50:33.936947 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:01.452) 0:00:12.766 ********* 2025-08-29 20:50:33.936959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.936971 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.936984 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.936996 | orchestrator | 2025-08-29 20:50:33.937008 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 20:50:33.937021 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:00.139) 0:00:12.906 ********* 2025-08-29 20:50:33.937032 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937044 | orchestrator | 2025-08-29 20:50:33.937056 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 20:50:33.937087 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:00.126) 0:00:13.032 ********* 2025-08-29 20:50:33.937100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.937125 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937138 | orchestrator | 2025-08-29 20:50:33.937150 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 20:50:33.937161 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:00.252) 0:00:13.285 ********* 2025-08-29 20:50:33.937172 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937183 | orchestrator | 2025-08-29 20:50:33.937194 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 20:50:33.937205 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:00.121) 0:00:13.406 ********* 2025-08-29 20:50:33.937216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937236 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.937247 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937258 | orchestrator | 2025-08-29 20:50:33.937269 | orchestrator | 
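The "Create block VGs" and "Create block LVs" tasks above turn each lvm_volumes entry into a ceph-<uuid> volume group on the raw disk and an osd-block-<uuid> logical volume spanning it. A hedged sketch of equivalent tasks; the actual playbook may differ, and _block_vg_pvs (the VG-to-physical-device map built from ceph_osd_devices) is an assumed variable name:

# Sketch only: block VG/LV creation per lvm_volumes entry
# _block_vg_pvs is assumed to look like
#   {'ceph-028c3e14-...': '/dev/sdb', 'ceph-79476f9b-...': '/dev/sdc'}
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    pvs: "{{ _block_vg_pvs[item.data_vg] }}"
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%VG
  loop: "{{ lvm_volumes }}"

With no dedicated DB/WAL devices each OSD keeps RocksDB and WAL co-located on its block LV, which is why every DB/WAL related task in this run is skipped.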
TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 20:50:33.937280 | orchestrator | Friday 29 August 2025 20:50:32 +0000 (0:00:00.130) 0:00:13.536 ********* 2025-08-29 20:50:33.937290 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937301 | orchestrator | 2025-08-29 20:50:33.937312 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 20:50:33.937323 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.121) 0:00:13.658 ********* 2025-08-29 20:50:33.937334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.937356 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937366 | orchestrator | 2025-08-29 20:50:33.937377 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 20:50:33.937388 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.125) 0:00:13.784 ********* 2025-08-29 20:50:33.937399 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:33.937410 | orchestrator | 2025-08-29 20:50:33.937421 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 20:50:33.937432 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.114) 0:00:13.899 ********* 2025-08-29 20:50:33.937456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.937482 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937493 | orchestrator | 2025-08-29 20:50:33.937504 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 20:50:33.937515 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.135) 0:00:14.034 ********* 2025-08-29 20:50:33.937526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:33.937547 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937558 | orchestrator | 2025-08-29 20:50:33.937569 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 20:50:33.937580 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.125) 0:00:14.159 ********* 2025-08-29 20:50:33.937591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:33.937601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  
2025-08-29 20:50:33.937612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937623 | orchestrator | 2025-08-29 20:50:33.937634 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 20:50:33.937645 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.129) 0:00:14.289 ********* 2025-08-29 20:50:33.937656 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937667 | orchestrator | 2025-08-29 20:50:33.937677 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 20:50:33.937694 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.116) 0:00:14.406 ********* 2025-08-29 20:50:33.937705 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:33.937716 | orchestrator | 2025-08-29 20:50:33.937733 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 20:50:39.487773 | orchestrator | Friday 29 August 2025 20:50:33 +0000 (0:00:00.109) 0:00:14.515 ********* 2025-08-29 20:50:39.487904 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.487918 | orchestrator | 2025-08-29 20:50:39.487928 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 20:50:39.487938 | orchestrator | Friday 29 August 2025 20:50:34 +0000 (0:00:00.140) 0:00:14.656 ********* 2025-08-29 20:50:39.487952 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:50:39.487968 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 20:50:39.487983 | orchestrator | } 2025-08-29 20:50:39.487997 | orchestrator | 2025-08-29 20:50:39.488011 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 20:50:39.488025 | orchestrator | Friday 29 August 2025 20:50:34 +0000 (0:00:00.245) 0:00:14.902 ********* 2025-08-29 20:50:39.488039 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:50:39.488053 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 20:50:39.488067 | orchestrator | } 2025-08-29 20:50:39.488080 | orchestrator | 2025-08-29 20:50:39.488094 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 20:50:39.488108 | orchestrator | Friday 29 August 2025 20:50:34 +0000 (0:00:00.143) 0:00:15.045 ********* 2025-08-29 20:50:39.488122 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:50:39.488135 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 20:50:39.488149 | orchestrator | } 2025-08-29 20:50:39.488163 | orchestrator | 2025-08-29 20:50:39.488177 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 20:50:39.488192 | orchestrator | Friday 29 August 2025 20:50:34 +0000 (0:00:00.116) 0:00:15.162 ********* 2025-08-29 20:50:39.488206 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:39.488222 | orchestrator | 2025-08-29 20:50:39.488237 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 20:50:39.488251 | orchestrator | Friday 29 August 2025 20:50:35 +0000 (0:00:00.626) 0:00:15.788 ********* 2025-08-29 20:50:39.488266 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:39.488281 | orchestrator | 2025-08-29 20:50:39.488296 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 20:50:39.488310 | orchestrator | Friday 29 August 2025 20:50:35 +0000 (0:00:00.479) 
0:00:16.267 ********* 2025-08-29 20:50:39.488325 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:39.488340 | orchestrator | 2025-08-29 20:50:39.488354 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 20:50:39.488370 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.498) 0:00:16.765 ********* 2025-08-29 20:50:39.488384 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:39.488399 | orchestrator | 2025-08-29 20:50:39.488413 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 20:50:39.488429 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.174) 0:00:16.939 ********* 2025-08-29 20:50:39.488444 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488459 | orchestrator | 2025-08-29 20:50:39.488473 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 20:50:39.488489 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.107) 0:00:17.046 ********* 2025-08-29 20:50:39.488504 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488518 | orchestrator | 2025-08-29 20:50:39.488533 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 20:50:39.488547 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.094) 0:00:17.141 ********* 2025-08-29 20:50:39.488562 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:50:39.488600 | orchestrator |  "vgs_report": { 2025-08-29 20:50:39.488611 | orchestrator |  "vg": [] 2025-08-29 20:50:39.488620 | orchestrator |  } 2025-08-29 20:50:39.488629 | orchestrator | } 2025-08-29 20:50:39.488649 | orchestrator | 2025-08-29 20:50:39.488672 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 20:50:39.488681 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.143) 0:00:17.284 ********* 2025-08-29 20:50:39.488690 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488698 | orchestrator | 2025-08-29 20:50:39.488706 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 20:50:39.488714 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.121) 0:00:17.406 ********* 2025-08-29 20:50:39.488722 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488730 | orchestrator | 2025-08-29 20:50:39.488737 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 20:50:39.488746 | orchestrator | Friday 29 August 2025 20:50:36 +0000 (0:00:00.107) 0:00:17.514 ********* 2025-08-29 20:50:39.488760 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488773 | orchestrator | 2025-08-29 20:50:39.488788 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 20:50:39.488825 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.232) 0:00:17.747 ********* 2025-08-29 20:50:39.488839 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488854 | orchestrator | 2025-08-29 20:50:39.488868 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 20:50:39.488883 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.127) 0:00:17.874 ********* 2025-08-29 20:50:39.488898 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488912 | orchestrator | 
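The "Gather DB/WAL VGs with total and available size in bytes" tasks most likely query LVM's JSON report, and the combined result is printed as vgs_report. A hedged sketch of how such a report can be gathered (the vgs flags are standard LVM2; task and variable names are assumptions):

# Sketch only: collect VG name/size/free space as JSON
- name: Gather VGs with total and available size in bytes
  ansible.builtin.command: >-
    vgs --units b --nosuffix --reportformat json
    -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from the vgs output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"

The report[0] element has the shape {"vg": [{"vg_name": ..., "vg_size": ..., "vg_free": ...}, ...]}; the real tasks presumably restrict the query to the configured DB/WAL VGs, which is why the printed vgs_report contains only "vg": [] in this run.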
2025-08-29 20:50:39.488926 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 20:50:39.488940 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.123) 0:00:17.998 ********* 2025-08-29 20:50:39.488954 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.488968 | orchestrator | 2025-08-29 20:50:39.488982 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 20:50:39.488997 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.131) 0:00:18.130 ********* 2025-08-29 20:50:39.489009 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489024 | orchestrator | 2025-08-29 20:50:39.489039 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 20:50:39.489053 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.104) 0:00:18.234 ********* 2025-08-29 20:50:39.489068 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489080 | orchestrator | 2025-08-29 20:50:39.489094 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 20:50:39.489128 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.140) 0:00:18.375 ********* 2025-08-29 20:50:39.489142 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489156 | orchestrator | 2025-08-29 20:50:39.489169 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 20:50:39.489183 | orchestrator | Friday 29 August 2025 20:50:37 +0000 (0:00:00.127) 0:00:18.502 ********* 2025-08-29 20:50:39.489197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489210 | orchestrator | 2025-08-29 20:50:39.489224 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 20:50:39.489237 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.117) 0:00:18.620 ********* 2025-08-29 20:50:39.489251 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489266 | orchestrator | 2025-08-29 20:50:39.489280 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 20:50:39.489293 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.119) 0:00:18.739 ********* 2025-08-29 20:50:39.489308 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489323 | orchestrator | 2025-08-29 20:50:39.489337 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 20:50:39.489365 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.120) 0:00:18.859 ********* 2025-08-29 20:50:39.489380 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489394 | orchestrator | 2025-08-29 20:50:39.489409 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 20:50:39.489422 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.120) 0:00:18.980 ********* 2025-08-29 20:50:39.489436 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489451 | orchestrator | 2025-08-29 20:50:39.489465 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 20:50:39.489479 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.120) 0:00:19.101 ********* 2025-08-29 20:50:39.489495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:39.489526 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489540 | orchestrator | 2025-08-29 20:50:39.489555 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 20:50:39.489571 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.144) 0:00:19.246 ********* 2025-08-29 20:50:39.489585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:39.489614 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489627 | orchestrator | 2025-08-29 20:50:39.489642 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 20:50:39.489656 | orchestrator | Friday 29 August 2025 20:50:38 +0000 (0:00:00.272) 0:00:19.518 ********* 2025-08-29 20:50:39.489671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:39.489697 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489711 | orchestrator | 2025-08-29 20:50:39.489724 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 20:50:39.489738 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.135) 0:00:19.653 ********* 2025-08-29 20:50:39.489751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:39.489778 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489791 | orchestrator | 2025-08-29 20:50:39.489823 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 20:50:39.489837 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.126) 0:00:19.779 ********* 2025-08-29 20:50:39.489851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:39.489878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:39.489893 | orchestrator | 2025-08-29 20:50:39.489907 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-08-29 20:50:39.489931 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.141) 0:00:19.921 ********* 2025-08-29 20:50:39.489954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:39.489979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.365914 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366071 | orchestrator | 2025-08-29 20:50:44.366100 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 20:50:44.366113 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.146) 0:00:20.067 ********* 2025-08-29 20:50:44.366124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:44.366136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.366147 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366158 | orchestrator | 2025-08-29 20:50:44.366169 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 20:50:44.366179 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.147) 0:00:20.215 ********* 2025-08-29 20:50:44.366190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:44.366201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.366211 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366222 | orchestrator | 2025-08-29 20:50:44.366233 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 20:50:44.366244 | orchestrator | Friday 29 August 2025 20:50:39 +0000 (0:00:00.144) 0:00:20.359 ********* 2025-08-29 20:50:44.366255 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:44.366266 | orchestrator | 2025-08-29 20:50:44.366277 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 20:50:44.366287 | orchestrator | Friday 29 August 2025 20:50:40 +0000 (0:00:00.508) 0:00:20.867 ********* 2025-08-29 20:50:44.366298 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:44.366309 | orchestrator | 2025-08-29 20:50:44.366319 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 20:50:44.366330 | orchestrator | Friday 29 August 2025 20:50:40 +0000 (0:00:00.547) 0:00:21.414 ********* 2025-08-29 20:50:44.366340 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:50:44.366351 | orchestrator | 2025-08-29 20:50:44.366362 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 20:50:44.366372 | orchestrator | Friday 29 August 2025 20:50:40 +0000 (0:00:00.114) 0:00:21.529 ********* 2025-08-29 20:50:44.366383 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'vg_name': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}) 2025-08-29 20:50:44.366395 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'vg_name': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'}) 2025-08-29 20:50:44.366405 | orchestrator | 2025-08-29 20:50:44.366432 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 20:50:44.366446 | orchestrator | Friday 29 August 2025 20:50:41 +0000 (0:00:00.142) 0:00:21.672 ********* 2025-08-29 20:50:44.366458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:44.366470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.366505 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366518 | orchestrator | 2025-08-29 20:50:44.366529 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 20:50:44.366542 | orchestrator | Friday 29 August 2025 20:50:41 +0000 (0:00:00.125) 0:00:21.798 ********* 2025-08-29 20:50:44.366553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:44.366565 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.366577 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366589 | orchestrator | 2025-08-29 20:50:44.366600 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 20:50:44.366612 | orchestrator | Friday 29 August 2025 20:50:41 +0000 (0:00:00.291) 0:00:22.089 ********* 2025-08-29 20:50:44.366625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'})  2025-08-29 20:50:44.366637 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'})  2025-08-29 20:50:44.366649 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:50:44.366661 | orchestrator | 2025-08-29 20:50:44.366673 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 20:50:44.366685 | orchestrator | Friday 29 August 2025 20:50:41 +0000 (0:00:00.145) 0:00:22.234 ********* 2025-08-29 20:50:44.366696 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 20:50:44.366709 | orchestrator |  "lvm_report": { 2025-08-29 20:50:44.366722 | orchestrator |  "lv": [ 2025-08-29 20:50:44.366734 | orchestrator |  { 2025-08-29 20:50:44.366763 | orchestrator |  "lv_name": "osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0", 2025-08-29 20:50:44.366776 | orchestrator |  "vg_name": "ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0" 2025-08-29 20:50:44.366787 | orchestrator |  }, 2025-08-29 20:50:44.366821 | orchestrator |  { 2025-08-29 20:50:44.366835 | orchestrator |  "lv_name": "osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43", 2025-08-29 20:50:44.366846 | orchestrator |  "vg_name": 
"ceph-79476f9b-63cb-5c74-926b-50a3eb682c43" 2025-08-29 20:50:44.366856 | orchestrator |  } 2025-08-29 20:50:44.366867 | orchestrator |  ], 2025-08-29 20:50:44.366878 | orchestrator |  "pv": [ 2025-08-29 20:50:44.366889 | orchestrator |  { 2025-08-29 20:50:44.366899 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 20:50:44.366910 | orchestrator |  "vg_name": "ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0" 2025-08-29 20:50:44.366921 | orchestrator |  }, 2025-08-29 20:50:44.366932 | orchestrator |  { 2025-08-29 20:50:44.366942 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 20:50:44.366953 | orchestrator |  "vg_name": "ceph-79476f9b-63cb-5c74-926b-50a3eb682c43" 2025-08-29 20:50:44.366963 | orchestrator |  } 2025-08-29 20:50:44.366974 | orchestrator |  ] 2025-08-29 20:50:44.366985 | orchestrator |  } 2025-08-29 20:50:44.366996 | orchestrator | } 2025-08-29 20:50:44.367007 | orchestrator | 2025-08-29 20:50:44.367017 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 20:50:44.367028 | orchestrator | 2025-08-29 20:50:44.367039 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:50:44.367049 | orchestrator | Friday 29 August 2025 20:50:41 +0000 (0:00:00.260) 0:00:22.495 ********* 2025-08-29 20:50:44.367060 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 20:50:44.367071 | orchestrator | 2025-08-29 20:50:44.367090 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:50:44.367101 | orchestrator | Friday 29 August 2025 20:50:42 +0000 (0:00:00.236) 0:00:22.732 ********* 2025-08-29 20:50:44.367111 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:44.367122 | orchestrator | 2025-08-29 20:50:44.367133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367143 | orchestrator | Friday 29 August 2025 20:50:42 +0000 (0:00:00.213) 0:00:22.945 ********* 2025-08-29 20:50:44.367154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-08-29 20:50:44.367164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-08-29 20:50:44.367175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 20:50:44.367186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 20:50:44.367196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 20:50:44.367207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 20:50:44.367218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 20:50:44.367228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 20:50:44.367244 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 20:50:44.367255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 20:50:44.367266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 20:50:44.367276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-08-29 20:50:44.367287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 20:50:44.367298 | orchestrator | 2025-08-29 20:50:44.367309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367319 | orchestrator | Friday 29 August 2025 20:50:42 +0000 (0:00:00.424) 0:00:23.370 ********* 2025-08-29 20:50:44.367330 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367341 | orchestrator | 2025-08-29 20:50:44.367351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367362 | orchestrator | Friday 29 August 2025 20:50:42 +0000 (0:00:00.186) 0:00:23.557 ********* 2025-08-29 20:50:44.367373 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367383 | orchestrator | 2025-08-29 20:50:44.367394 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367405 | orchestrator | Friday 29 August 2025 20:50:43 +0000 (0:00:00.180) 0:00:23.737 ********* 2025-08-29 20:50:44.367415 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367426 | orchestrator | 2025-08-29 20:50:44.367437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367447 | orchestrator | Friday 29 August 2025 20:50:43 +0000 (0:00:00.183) 0:00:23.920 ********* 2025-08-29 20:50:44.367458 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367469 | orchestrator | 2025-08-29 20:50:44.367479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367490 | orchestrator | Friday 29 August 2025 20:50:43 +0000 (0:00:00.488) 0:00:24.409 ********* 2025-08-29 20:50:44.367501 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367511 | orchestrator | 2025-08-29 20:50:44.367522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367532 | orchestrator | Friday 29 August 2025 20:50:43 +0000 (0:00:00.173) 0:00:24.582 ********* 2025-08-29 20:50:44.367543 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367554 | orchestrator | 2025-08-29 20:50:44.367564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:44.367581 | orchestrator | Friday 29 August 2025 20:50:44 +0000 (0:00:00.177) 0:00:24.759 ********* 2025-08-29 20:50:44.367592 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:44.367603 | orchestrator | 2025-08-29 20:50:44.367621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958238 | orchestrator | Friday 29 August 2025 20:50:44 +0000 (0:00:00.185) 0:00:24.945 ********* 2025-08-29 20:50:53.958303 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958313 | orchestrator | 2025-08-29 20:50:53.958321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958329 | orchestrator | Friday 29 August 2025 20:50:44 +0000 (0:00:00.188) 0:00:25.133 ********* 2025-08-29 20:50:53.958335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0) 2025-08-29 20:50:53.958343 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0) 2025-08-29 
20:50:53.958350 | orchestrator | 2025-08-29 20:50:53.958356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958363 | orchestrator | Friday 29 August 2025 20:50:44 +0000 (0:00:00.400) 0:00:25.534 ********* 2025-08-29 20:50:53.958370 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2) 2025-08-29 20:50:53.958376 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2) 2025-08-29 20:50:53.958383 | orchestrator | 2025-08-29 20:50:53.958390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958396 | orchestrator | Friday 29 August 2025 20:50:45 +0000 (0:00:00.384) 0:00:25.919 ********* 2025-08-29 20:50:53.958403 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042) 2025-08-29 20:50:53.958410 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042) 2025-08-29 20:50:53.958417 | orchestrator | 2025-08-29 20:50:53.958423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958430 | orchestrator | Friday 29 August 2025 20:50:45 +0000 (0:00:00.414) 0:00:26.333 ********* 2025-08-29 20:50:53.958437 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df) 2025-08-29 20:50:53.958443 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df) 2025-08-29 20:50:53.958450 | orchestrator | 2025-08-29 20:50:53.958457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:50:53.958463 | orchestrator | Friday 29 August 2025 20:50:46 +0000 (0:00:00.411) 0:00:26.745 ********* 2025-08-29 20:50:53.958470 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:50:53.958477 | orchestrator | 2025-08-29 20:50:53.958483 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958490 | orchestrator | Friday 29 August 2025 20:50:46 +0000 (0:00:00.331) 0:00:27.077 ********* 2025-08-29 20:50:53.958497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 20:50:53.958504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 20:50:53.958510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 20:50:53.958517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 20:50:53.958523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 20:50:53.958530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 20:50:53.958549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 20:50:53.958570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 20:50:53.958577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 20:50:53.958583 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 20:50:53.958590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 20:50:53.958596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 20:50:53.958614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 20:50:53.958621 | orchestrator | 2025-08-29 20:50:53.958634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958641 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.524) 0:00:27.601 ********* 2025-08-29 20:50:53.958648 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958655 | orchestrator | 2025-08-29 20:50:53.958661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958668 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.195) 0:00:27.797 ********* 2025-08-29 20:50:53.958675 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958681 | orchestrator | 2025-08-29 20:50:53.958688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958696 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.217) 0:00:28.014 ********* 2025-08-29 20:50:53.958708 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958718 | orchestrator | 2025-08-29 20:50:53.958730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958742 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.196) 0:00:28.211 ********* 2025-08-29 20:50:53.958753 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958762 | orchestrator | 2025-08-29 20:50:53.958781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958788 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.202) 0:00:28.414 ********* 2025-08-29 20:50:53.958795 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958817 | orchestrator | 2025-08-29 20:50:53.958825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958833 | orchestrator | Friday 29 August 2025 20:50:47 +0000 (0:00:00.167) 0:00:28.582 ********* 2025-08-29 20:50:53.958840 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958848 | orchestrator | 2025-08-29 20:50:53.958855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958862 | orchestrator | Friday 29 August 2025 20:50:48 +0000 (0:00:00.214) 0:00:28.796 ********* 2025-08-29 20:50:53.958870 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958877 | orchestrator | 2025-08-29 20:50:53.958885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958892 | orchestrator | Friday 29 August 2025 20:50:48 +0000 (0:00:00.204) 0:00:29.001 ********* 2025-08-29 20:50:53.958900 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958907 | orchestrator | 2025-08-29 20:50:53.958915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958922 | orchestrator 
| Friday 29 August 2025 20:50:48 +0000 (0:00:00.181) 0:00:29.182 ********* 2025-08-29 20:50:53.958930 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 20:50:53.958937 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 20:50:53.958945 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 20:50:53.958953 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 20:50:53.958960 | orchestrator | 2025-08-29 20:50:53.958968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.958976 | orchestrator | Friday 29 August 2025 20:50:49 +0000 (0:00:00.693) 0:00:29.875 ********* 2025-08-29 20:50:53.958990 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.958998 | orchestrator | 2025-08-29 20:50:53.959006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.959014 | orchestrator | Friday 29 August 2025 20:50:49 +0000 (0:00:00.172) 0:00:30.048 ********* 2025-08-29 20:50:53.959021 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.959029 | orchestrator | 2025-08-29 20:50:53.959036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.959044 | orchestrator | Friday 29 August 2025 20:50:49 +0000 (0:00:00.198) 0:00:30.247 ********* 2025-08-29 20:50:53.959051 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.959059 | orchestrator | 2025-08-29 20:50:53.959066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:50:53.959074 | orchestrator | Friday 29 August 2025 20:50:50 +0000 (0:00:00.509) 0:00:30.757 ********* 2025-08-29 20:50:53.959081 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.959088 | orchestrator | 2025-08-29 20:50:53.959096 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 20:50:53.959103 | orchestrator | Friday 29 August 2025 20:50:50 +0000 (0:00:00.211) 0:00:30.968 ********* 2025-08-29 20:50:53.959110 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.959118 | orchestrator | 2025-08-29 20:50:53.959129 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 20:50:53.959137 | orchestrator | Friday 29 August 2025 20:50:50 +0000 (0:00:00.126) 0:00:31.095 ********* 2025-08-29 20:50:53.959145 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76a76f98-f10a-56c2-85c8-c111ab4c87c6'}}) 2025-08-29 20:50:53.959152 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}}) 2025-08-29 20:50:53.959159 | orchestrator | 2025-08-29 20:50:53.959165 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 20:50:53.959172 | orchestrator | Friday 29 August 2025 20:50:50 +0000 (0:00:00.179) 0:00:31.275 ********* 2025-08-29 20:50:53.959179 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'}) 2025-08-29 20:50:53.959186 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}) 2025-08-29 20:50:53.959193 | orchestrator | 2025-08-29 20:50:53.959200 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-08-29 20:50:53.959206 | orchestrator | Friday 29 August 2025 20:50:52 +0000 (0:00:01.854) 0:00:33.129 ********* 2025-08-29 20:50:53.959213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:53.959220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:53.959227 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:53.959234 | orchestrator | 2025-08-29 20:50:53.959240 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 20:50:53.959247 | orchestrator | Friday 29 August 2025 20:50:52 +0000 (0:00:00.124) 0:00:33.254 ********* 2025-08-29 20:50:53.959254 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'}) 2025-08-29 20:50:53.959260 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}) 2025-08-29 20:50:53.959267 | orchestrator | 2025-08-29 20:50:53.959278 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 20:50:58.931935 | orchestrator | Friday 29 August 2025 20:50:53 +0000 (0:00:01.281) 0:00:34.535 ********* 2025-08-29 20:50:58.932054 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932083 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932095 | orchestrator | 2025-08-29 20:50:58.932107 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 20:50:58.932118 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.145) 0:00:34.681 ********* 2025-08-29 20:50:58.932129 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932139 | orchestrator | 2025-08-29 20:50:58.932150 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 20:50:58.932161 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.131) 0:00:34.812 ********* 2025-08-29 20:50:58.932172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932183 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932194 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932205 | orchestrator | 2025-08-29 20:50:58.932215 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 20:50:58.932226 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.149) 0:00:34.961 ********* 2025-08-29 20:50:58.932237 | orchestrator | skipping: [testbed-node-4] 
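In the run above only "Create block VGs" and "Create block LVs" report changed: each OSD data disk gets a volume group named ceph-<osd_lvm_uuid> and a single logical volume osd-block-<osd_lvm_uuid> filling it, while the DB/WAL variants are skipped because no ceph_db_devices or ceph_wal_devices are defined. A hedged sketch of that step using the standard community.general LVM modules (not the playbook actually driving this log; the device map simply mirrors the values printed above for testbed-node-4):

- name: Create Ceph block VGs and LVs (illustrative sketch)
  hosts: testbed-node-4
  become: true
  vars:
    # Mirrors the ceph_osd_devices entries visible in the log above.
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 76a76f98-f10a-56c2-85c8-c111ab4c87c6
      sdc:
        osd_lvm_uuid: f3fee7d3-6bcf-515f-a6c3-caef0862fd99
  tasks:
    - name: Create one VG per OSD data device
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"
      loop_control:
        label: "{{ item.key }}"

    - name: Create the osd-block LV consuming the whole VG
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: "100%FREE"
        shrink: false
      loop: "{{ ceph_osd_devices | dict2items }}"
      loop_control:
        label: "{{ item.key }}"

These VG/LV names are what the later "Fail if block LV defined in lvm_volumes is missing" checks in this play validate against.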
2025-08-29 20:50:58.932247 | orchestrator | 2025-08-29 20:50:58.932258 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 20:50:58.932269 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.124) 0:00:35.086 ********* 2025-08-29 20:50:58.932298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932320 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932331 | orchestrator | 2025-08-29 20:50:58.932342 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 20:50:58.932352 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.134) 0:00:35.221 ********* 2025-08-29 20:50:58.932363 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932373 | orchestrator | 2025-08-29 20:50:58.932395 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 20:50:58.932407 | orchestrator | Friday 29 August 2025 20:50:54 +0000 (0:00:00.265) 0:00:35.487 ********* 2025-08-29 20:50:58.932423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932461 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932480 | orchestrator | 2025-08-29 20:50:58.932499 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 20:50:58.932518 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.135) 0:00:35.622 ********* 2025-08-29 20:50:58.932537 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:58.932559 | orchestrator | 2025-08-29 20:50:58.932578 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 20:50:58.932598 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.129) 0:00:35.751 ********* 2025-08-29 20:50:58.932625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932638 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932651 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932663 | orchestrator | 2025-08-29 20:50:58.932675 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 20:50:58.932687 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.127) 0:00:35.879 ********* 2025-08-29 20:50:58.932699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932723 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932735 | orchestrator | 2025-08-29 20:50:58.932747 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 20:50:58.932760 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.143) 0:00:36.022 ********* 2025-08-29 20:50:58.932789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:50:58.932802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:50:58.932843 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932855 | orchestrator | 2025-08-29 20:50:58.932865 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 20:50:58.932876 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.140) 0:00:36.163 ********* 2025-08-29 20:50:58.932887 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932898 | orchestrator | 2025-08-29 20:50:58.932908 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 20:50:58.932919 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.137) 0:00:36.300 ********* 2025-08-29 20:50:58.932930 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932940 | orchestrator | 2025-08-29 20:50:58.932951 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 20:50:58.932962 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.137) 0:00:36.438 ********* 2025-08-29 20:50:58.932972 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.932983 | orchestrator | 2025-08-29 20:50:58.932994 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 20:50:58.933004 | orchestrator | Friday 29 August 2025 20:50:55 +0000 (0:00:00.135) 0:00:36.573 ********* 2025-08-29 20:50:58.933015 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:50:58.933026 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 20:50:58.933037 | orchestrator | } 2025-08-29 20:50:58.933048 | orchestrator | 2025-08-29 20:50:58.933058 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 20:50:58.933069 | orchestrator | Friday 29 August 2025 20:50:56 +0000 (0:00:00.131) 0:00:36.705 ********* 2025-08-29 20:50:58.933080 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:50:58.933090 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 20:50:58.933101 | orchestrator | } 2025-08-29 20:50:58.933112 | orchestrator | 2025-08-29 20:50:58.933123 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 20:50:58.933133 | orchestrator | Friday 29 August 2025 20:50:56 +0000 (0:00:00.125) 0:00:36.831 ********* 2025-08-29 20:50:58.933144 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:50:58.933155 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 20:50:58.933166 | orchestrator | } 2025-08-29 20:50:58.933185 | orchestrator | 2025-08-29 20:50:58.933196 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-08-29 20:50:58.933206 | orchestrator | Friday 29 August 2025 20:50:56 +0000 (0:00:00.124) 0:00:36.955 ********* 2025-08-29 20:50:58.933217 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:58.933228 | orchestrator | 2025-08-29 20:50:58.933238 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 20:50:58.933249 | orchestrator | Friday 29 August 2025 20:50:56 +0000 (0:00:00.589) 0:00:37.544 ********* 2025-08-29 20:50:58.933260 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:58.933270 | orchestrator | 2025-08-29 20:50:58.933281 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 20:50:58.933292 | orchestrator | Friday 29 August 2025 20:50:57 +0000 (0:00:00.498) 0:00:38.043 ********* 2025-08-29 20:50:58.933303 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:58.933314 | orchestrator | 2025-08-29 20:50:58.933324 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 20:50:58.933335 | orchestrator | Friday 29 August 2025 20:50:57 +0000 (0:00:00.495) 0:00:38.538 ********* 2025-08-29 20:50:58.933346 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:50:58.933356 | orchestrator | 2025-08-29 20:50:58.933367 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 20:50:58.933378 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.132) 0:00:38.671 ********* 2025-08-29 20:50:58.933388 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933399 | orchestrator | 2025-08-29 20:50:58.933410 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 20:50:58.933429 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.096) 0:00:38.768 ********* 2025-08-29 20:50:58.933440 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933450 | orchestrator | 2025-08-29 20:50:58.933461 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 20:50:58.933472 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.108) 0:00:38.876 ********* 2025-08-29 20:50:58.933482 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:50:58.933494 | orchestrator |  "vgs_report": { 2025-08-29 20:50:58.933505 | orchestrator |  "vg": [] 2025-08-29 20:50:58.933516 | orchestrator |  } 2025-08-29 20:50:58.933527 | orchestrator | } 2025-08-29 20:50:58.933538 | orchestrator | 2025-08-29 20:50:58.933549 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 20:50:58.933559 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.133) 0:00:39.010 ********* 2025-08-29 20:50:58.933572 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933591 | orchestrator | 2025-08-29 20:50:58.933610 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 20:50:58.933628 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.123) 0:00:39.133 ********* 2025-08-29 20:50:58.933645 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933664 | orchestrator | 2025-08-29 20:50:58.933682 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 20:50:58.933701 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.123) 
0:00:39.256 ********* 2025-08-29 20:50:58.933721 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933739 | orchestrator | 2025-08-29 20:50:58.933755 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 20:50:58.933766 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.127) 0:00:39.384 ********* 2025-08-29 20:50:58.933777 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:50:58.933787 | orchestrator | 2025-08-29 20:50:58.933798 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 20:50:58.933880 | orchestrator | Friday 29 August 2025 20:50:58 +0000 (0:00:00.126) 0:00:39.510 ********* 2025-08-29 20:51:03.113194 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113284 | orchestrator | 2025-08-29 20:51:03.113300 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 20:51:03.113337 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.113) 0:00:39.624 ********* 2025-08-29 20:51:03.113349 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113359 | orchestrator | 2025-08-29 20:51:03.113371 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 20:51:03.113381 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.251) 0:00:39.876 ********* 2025-08-29 20:51:03.113392 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113403 | orchestrator | 2025-08-29 20:51:03.113413 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 20:51:03.113424 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.110) 0:00:39.986 ********* 2025-08-29 20:51:03.113435 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113445 | orchestrator | 2025-08-29 20:51:03.113456 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 20:51:03.113467 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.124) 0:00:40.111 ********* 2025-08-29 20:51:03.113477 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113488 | orchestrator | 2025-08-29 20:51:03.113498 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 20:51:03.113509 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.122) 0:00:40.233 ********* 2025-08-29 20:51:03.113520 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113530 | orchestrator | 2025-08-29 20:51:03.113541 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 20:51:03.113552 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.131) 0:00:40.364 ********* 2025-08-29 20:51:03.113562 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113573 | orchestrator | 2025-08-29 20:51:03.113584 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 20:51:03.113594 | orchestrator | Friday 29 August 2025 20:50:59 +0000 (0:00:00.121) 0:00:40.486 ********* 2025-08-29 20:51:03.113605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113615 | orchestrator | 2025-08-29 20:51:03.113626 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 20:51:03.113637 | orchestrator | Friday 29 August 2025 20:51:00 
+0000 (0:00:00.125) 0:00:40.611 ********* 2025-08-29 20:51:03.113647 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113658 | orchestrator | 2025-08-29 20:51:03.113669 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 20:51:03.113679 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.129) 0:00:40.741 ********* 2025-08-29 20:51:03.113690 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113700 | orchestrator | 2025-08-29 20:51:03.113711 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 20:51:03.113722 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.113) 0:00:40.855 ********* 2025-08-29 20:51:03.113747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.113761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.113774 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113787 | orchestrator | 2025-08-29 20:51:03.113799 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 20:51:03.113839 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.143) 0:00:40.998 ********* 2025-08-29 20:51:03.113853 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.113866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.113889 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113901 | orchestrator | 2025-08-29 20:51:03.113914 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 20:51:03.113926 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.142) 0:00:41.141 ********* 2025-08-29 20:51:03.113939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.113952 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.113964 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.113976 | orchestrator | 2025-08-29 20:51:03.113990 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 20:51:03.114003 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.133) 0:00:41.274 ********* 2025-08-29 20:51:03.114070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114099 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114111 | orchestrator | 2025-08-29 20:51:03.114122 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 20:51:03.114151 | orchestrator | Friday 29 August 2025 20:51:00 +0000 (0:00:00.273) 0:00:41.547 ********* 2025-08-29 20:51:03.114162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114184 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114195 | orchestrator | 2025-08-29 20:51:03.114206 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 20:51:03.114217 | orchestrator | Friday 29 August 2025 20:51:01 +0000 (0:00:00.153) 0:00:41.701 ********* 2025-08-29 20:51:03.114228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114239 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114250 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114261 | orchestrator | 2025-08-29 20:51:03.114272 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 20:51:03.114283 | orchestrator | Friday 29 August 2025 20:51:01 +0000 (0:00:00.145) 0:00:41.846 ********* 2025-08-29 20:51:03.114294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114316 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114327 | orchestrator | 2025-08-29 20:51:03.114338 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 20:51:03.114349 | orchestrator | Friday 29 August 2025 20:51:01 +0000 (0:00:00.156) 0:00:42.003 ********* 2025-08-29 20:51:03.114360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114389 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114400 | orchestrator | 2025-08-29 20:51:03.114411 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 20:51:03.114427 | orchestrator | Friday 29 August 2025 20:51:01 +0000 (0:00:00.147) 0:00:42.150 ********* 2025-08-29 20:51:03.114438 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:51:03.114449 | orchestrator | 2025-08-29 20:51:03.114460 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 20:51:03.114471 | orchestrator | Friday 29 August 2025 20:51:02 +0000 (0:00:00.522) 
0:00:42.673 ********* 2025-08-29 20:51:03.114482 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:51:03.114492 | orchestrator | 2025-08-29 20:51:03.114503 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 20:51:03.114514 | orchestrator | Friday 29 August 2025 20:51:02 +0000 (0:00:00.473) 0:00:43.146 ********* 2025-08-29 20:51:03.114525 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:51:03.114536 | orchestrator | 2025-08-29 20:51:03.114547 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 20:51:03.114558 | orchestrator | Friday 29 August 2025 20:51:02 +0000 (0:00:00.115) 0:00:43.261 ********* 2025-08-29 20:51:03.114569 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'vg_name': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'}) 2025-08-29 20:51:03.114580 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'vg_name': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}) 2025-08-29 20:51:03.114591 | orchestrator | 2025-08-29 20:51:03.114602 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 20:51:03.114613 | orchestrator | Friday 29 August 2025 20:51:02 +0000 (0:00:00.150) 0:00:43.412 ********* 2025-08-29 20:51:03.114624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114646 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:03.114656 | orchestrator | 2025-08-29 20:51:03.114667 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 20:51:03.114678 | orchestrator | Friday 29 August 2025 20:51:02 +0000 (0:00:00.135) 0:00:43.548 ********* 2025-08-29 20:51:03.114689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:03.114700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:03.114718 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:08.370422 | orchestrator | 2025-08-29 20:51:08.370485 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 20:51:08.370494 | orchestrator | Friday 29 August 2025 20:51:03 +0000 (0:00:00.143) 0:00:43.691 ********* 2025-08-29 20:51:08.370501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'})  2025-08-29 20:51:08.370508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'})  2025-08-29 20:51:08.370514 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:08.370521 | orchestrator | 2025-08-29 20:51:08.370527 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 
20:51:08.370533 | orchestrator | Friday 29 August 2025 20:51:03 +0000 (0:00:00.127) 0:00:43.819 ********* 2025-08-29 20:51:08.370551 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 20:51:08.370558 | orchestrator |  "lvm_report": { 2025-08-29 20:51:08.370564 | orchestrator |  "lv": [ 2025-08-29 20:51:08.370570 | orchestrator |  { 2025-08-29 20:51:08.370576 | orchestrator |  "lv_name": "osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6", 2025-08-29 20:51:08.370583 | orchestrator |  "vg_name": "ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6" 2025-08-29 20:51:08.370589 | orchestrator |  }, 2025-08-29 20:51:08.370595 | orchestrator |  { 2025-08-29 20:51:08.370600 | orchestrator |  "lv_name": "osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99", 2025-08-29 20:51:08.370606 | orchestrator |  "vg_name": "ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99" 2025-08-29 20:51:08.370612 | orchestrator |  } 2025-08-29 20:51:08.370617 | orchestrator |  ], 2025-08-29 20:51:08.370623 | orchestrator |  "pv": [ 2025-08-29 20:51:08.370628 | orchestrator |  { 2025-08-29 20:51:08.370634 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 20:51:08.370639 | orchestrator |  "vg_name": "ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6" 2025-08-29 20:51:08.370645 | orchestrator |  }, 2025-08-29 20:51:08.370650 | orchestrator |  { 2025-08-29 20:51:08.370656 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 20:51:08.370662 | orchestrator |  "vg_name": "ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99" 2025-08-29 20:51:08.370667 | orchestrator |  } 2025-08-29 20:51:08.370672 | orchestrator |  ] 2025-08-29 20:51:08.370678 | orchestrator |  } 2025-08-29 20:51:08.370684 | orchestrator | } 2025-08-29 20:51:08.370689 | orchestrator | 2025-08-29 20:51:08.370695 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 20:51:08.370700 | orchestrator | 2025-08-29 20:51:08.370706 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 20:51:08.370711 | orchestrator | Friday 29 August 2025 20:51:03 +0000 (0:00:00.389) 0:00:44.209 ********* 2025-08-29 20:51:08.370717 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 20:51:08.370723 | orchestrator | 2025-08-29 20:51:08.370728 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 20:51:08.370733 | orchestrator | Friday 29 August 2025 20:51:03 +0000 (0:00:00.230) 0:00:44.439 ********* 2025-08-29 20:51:08.370739 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:08.370745 | orchestrator | 2025-08-29 20:51:08.370751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370756 | orchestrator | Friday 29 August 2025 20:51:04 +0000 (0:00:00.220) 0:00:44.660 ********* 2025-08-29 20:51:08.370762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 20:51:08.370767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 20:51:08.370773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 20:51:08.370778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 20:51:08.370783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 20:51:08.370789 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 20:51:08.370794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 20:51:08.370800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 20:51:08.370805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 20:51:08.370842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 20:51:08.370849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 20:51:08.370860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 20:51:08.370866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 20:51:08.370872 | orchestrator | 2025-08-29 20:51:08.370879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370885 | orchestrator | Friday 29 August 2025 20:51:04 +0000 (0:00:00.363) 0:00:45.023 ********* 2025-08-29 20:51:08.370891 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.370897 | orchestrator | 2025-08-29 20:51:08.370905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370912 | orchestrator | Friday 29 August 2025 20:51:04 +0000 (0:00:00.184) 0:00:45.208 ********* 2025-08-29 20:51:08.370917 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.370923 | orchestrator | 2025-08-29 20:51:08.370929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370945 | orchestrator | Friday 29 August 2025 20:51:04 +0000 (0:00:00.178) 0:00:45.386 ********* 2025-08-29 20:51:08.370951 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.370956 | orchestrator | 2025-08-29 20:51:08.370962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370967 | orchestrator | Friday 29 August 2025 20:51:04 +0000 (0:00:00.182) 0:00:45.569 ********* 2025-08-29 20:51:08.370973 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.370978 | orchestrator | 2025-08-29 20:51:08.370983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.370989 | orchestrator | Friday 29 August 2025 20:51:05 +0000 (0:00:00.180) 0:00:45.750 ********* 2025-08-29 20:51:08.371022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.371029 | orchestrator | 2025-08-29 20:51:08.371035 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371041 | orchestrator | Friday 29 August 2025 20:51:05 +0000 (0:00:00.185) 0:00:45.936 ********* 2025-08-29 20:51:08.371047 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.371053 | orchestrator | 2025-08-29 20:51:08.371059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371065 | orchestrator | Friday 29 August 2025 20:51:05 +0000 (0:00:00.440) 0:00:46.376 ********* 2025-08-29 20:51:08.371071 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.371076 | orchestrator | 2025-08-29 20:51:08.371082 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 20:51:08.371088 | orchestrator | Friday 29 August 2025 20:51:05 +0000 (0:00:00.204) 0:00:46.580 ********* 2025-08-29 20:51:08.371094 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:08.371100 | orchestrator | 2025-08-29 20:51:08.371106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371112 | orchestrator | Friday 29 August 2025 20:51:06 +0000 (0:00:00.166) 0:00:46.747 ********* 2025-08-29 20:51:08.371118 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38) 2025-08-29 20:51:08.371125 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38) 2025-08-29 20:51:08.371131 | orchestrator | 2025-08-29 20:51:08.371137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371143 | orchestrator | Friday 29 August 2025 20:51:06 +0000 (0:00:00.386) 0:00:47.134 ********* 2025-08-29 20:51:08.371149 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88) 2025-08-29 20:51:08.371155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88) 2025-08-29 20:51:08.371161 | orchestrator | 2025-08-29 20:51:08.371167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371173 | orchestrator | Friday 29 August 2025 20:51:06 +0000 (0:00:00.370) 0:00:47.504 ********* 2025-08-29 20:51:08.371181 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59) 2025-08-29 20:51:08.371193 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59) 2025-08-29 20:51:08.371199 | orchestrator | 2025-08-29 20:51:08.371205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371210 | orchestrator | Friday 29 August 2025 20:51:07 +0000 (0:00:00.384) 0:00:47.889 ********* 2025-08-29 20:51:08.371216 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe) 2025-08-29 20:51:08.371222 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe) 2025-08-29 20:51:08.371228 | orchestrator | 2025-08-29 20:51:08.371235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 20:51:08.371241 | orchestrator | Friday 29 August 2025 20:51:07 +0000 (0:00:00.370) 0:00:48.260 ********* 2025-08-29 20:51:08.371247 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 20:51:08.371252 | orchestrator | 2025-08-29 20:51:08.371258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:08.371264 | orchestrator | Friday 29 August 2025 20:51:07 +0000 (0:00:00.312) 0:00:48.573 ********* 2025-08-29 20:51:08.371271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 20:51:08.371278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 20:51:08.371284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
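For each node the play ends by querying LVM itself and printing an lvm_report like the one shown for testbed-node-4 above ("Get list of Ceph LVs/PVs with associated VGs", "Combine JSON from _lvs_cmd_output/_pvs_cmd_output"). A sketch of how such a report can be assembled from lvs/pvs JSON output; the register and fact names here are invented, not the ones in the OSISM tasks:

- name: Build an LVM lv/pv report (illustrative sketch)
  hosts: testbed-node-5
  become: true
  tasks:
    - name: List LVs with their VGs as JSON
      # The real task presumably restricts this to the ceph-* VGs.
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_out
      changed_when: false

    - name: List PVs with their VGs as JSON
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_out
      changed_when: false

    - name: Combine both reports into one dict with 'lv' and 'pv' keys
      ansible.builtin.set_fact:
        lvm_report: >-
          {{ (_lvs_out.stdout | from_json).report[0]
             | combine((_pvs_out.stdout | from_json).report[0]) }}

    - name: Print the combined report
      ansible.builtin.debug:
        var: lvm_report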
2025-08-29 20:51:08.371290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 20:51:08.371296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 20:51:08.371303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 20:51:08.371309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 20:51:08.371316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 20:51:08.371322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 20:51:08.371329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 20:51:08.371335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 20:51:08.371346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 20:51:16.561406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 20:51:16.561521 | orchestrator | 2025-08-29 20:51:16.561551 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.561572 | orchestrator | Friday 29 August 2025 20:51:08 +0000 (0:00:00.374) 0:00:48.947 ********* 2025-08-29 20:51:16.561590 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.561609 | orchestrator | 2025-08-29 20:51:16.561628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.561646 | orchestrator | Friday 29 August 2025 20:51:08 +0000 (0:00:00.163) 0:00:49.110 ********* 2025-08-29 20:51:16.561664 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.561681 | orchestrator | 2025-08-29 20:51:16.561699 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.561717 | orchestrator | Friday 29 August 2025 20:51:08 +0000 (0:00:00.178) 0:00:49.289 ********* 2025-08-29 20:51:16.561735 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.561753 | orchestrator | 2025-08-29 20:51:16.561772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.561791 | orchestrator | Friday 29 August 2025 20:51:09 +0000 (0:00:00.454) 0:00:49.743 ********* 2025-08-29 20:51:16.561917 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.561943 | orchestrator | 2025-08-29 20:51:16.561965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.561984 | orchestrator | Friday 29 August 2025 20:51:09 +0000 (0:00:00.181) 0:00:49.924 ********* 2025-08-29 20:51:16.562006 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562146 | orchestrator | 2025-08-29 20:51:16.562172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562192 | orchestrator | Friday 29 August 2025 20:51:09 +0000 (0:00:00.179) 0:00:50.104 ********* 2025-08-29 20:51:16.562211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562232 | orchestrator | 2025-08-29 20:51:16.562252 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562271 | orchestrator | Friday 29 August 2025 20:51:09 +0000 (0:00:00.182) 0:00:50.287 ********* 2025-08-29 20:51:16.562290 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562310 | orchestrator | 2025-08-29 20:51:16.562329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562347 | orchestrator | Friday 29 August 2025 20:51:09 +0000 (0:00:00.198) 0:00:50.485 ********* 2025-08-29 20:51:16.562367 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562386 | orchestrator | 2025-08-29 20:51:16.562403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562422 | orchestrator | Friday 29 August 2025 20:51:10 +0000 (0:00:00.185) 0:00:50.670 ********* 2025-08-29 20:51:16.562441 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 20:51:16.562461 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 20:51:16.562479 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 20:51:16.562514 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 20:51:16.562534 | orchestrator | 2025-08-29 20:51:16.562554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562572 | orchestrator | Friday 29 August 2025 20:51:10 +0000 (0:00:00.583) 0:00:51.254 ********* 2025-08-29 20:51:16.562590 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562608 | orchestrator | 2025-08-29 20:51:16.562627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562644 | orchestrator | Friday 29 August 2025 20:51:10 +0000 (0:00:00.196) 0:00:51.451 ********* 2025-08-29 20:51:16.562661 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562679 | orchestrator | 2025-08-29 20:51:16.562698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562717 | orchestrator | Friday 29 August 2025 20:51:11 +0000 (0:00:00.189) 0:00:51.640 ********* 2025-08-29 20:51:16.562737 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562753 | orchestrator | 2025-08-29 20:51:16.562771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 20:51:16.562788 | orchestrator | Friday 29 August 2025 20:51:11 +0000 (0:00:00.195) 0:00:51.835 ********* 2025-08-29 20:51:16.562806 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562852 | orchestrator | 2025-08-29 20:51:16.562871 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 20:51:16.562888 | orchestrator | Friday 29 August 2025 20:51:11 +0000 (0:00:00.201) 0:00:52.037 ********* 2025-08-29 20:51:16.562907 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.562924 | orchestrator | 2025-08-29 20:51:16.562943 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 20:51:16.562963 | orchestrator | Friday 29 August 2025 20:51:11 +0000 (0:00:00.313) 0:00:52.350 ********* 2025-08-29 20:51:16.562981 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '275f26f1-4e1c-5372-9190-a1521a972d04'}}) 2025-08-29 20:51:16.562996 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'c5db720f-fb16-50b5-adff-95cbe6288183'}}) 2025-08-29 20:51:16.563034 | orchestrator | 2025-08-29 20:51:16.563054 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 20:51:16.563074 | orchestrator | Friday 29 August 2025 20:51:11 +0000 (0:00:00.180) 0:00:52.531 ********* 2025-08-29 20:51:16.563093 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'}) 2025-08-29 20:51:16.563114 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'}) 2025-08-29 20:51:16.563133 | orchestrator | 2025-08-29 20:51:16.563153 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 20:51:16.563199 | orchestrator | Friday 29 August 2025 20:51:13 +0000 (0:00:01.852) 0:00:54.383 ********* 2025-08-29 20:51:16.563221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:16.563240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:16.563259 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563278 | orchestrator | 2025-08-29 20:51:16.563298 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 20:51:16.563316 | orchestrator | Friday 29 August 2025 20:51:13 +0000 (0:00:00.153) 0:00:54.537 ********* 2025-08-29 20:51:16.563335 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'}) 2025-08-29 20:51:16.563375 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'}) 2025-08-29 20:51:16.563394 | orchestrator | 2025-08-29 20:51:16.563414 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 20:51:16.563434 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:01.297) 0:00:55.834 ********* 2025-08-29 20:51:16.563450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:16.563468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:16.563486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563506 | orchestrator | 2025-08-29 20:51:16.563524 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 20:51:16.563543 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:00.135) 0:00:55.970 ********* 2025-08-29 20:51:16.563561 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563579 | orchestrator | 2025-08-29 20:51:16.563597 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 20:51:16.563614 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:00.131) 0:00:56.101 
********* 2025-08-29 20:51:16.563633 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:16.563663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:16.563682 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563701 | orchestrator | 2025-08-29 20:51:16.563719 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 20:51:16.563740 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:00.135) 0:00:56.236 ********* 2025-08-29 20:51:16.563758 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563776 | orchestrator | 2025-08-29 20:51:16.563794 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 20:51:16.563863 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:00.107) 0:00:56.344 ********* 2025-08-29 20:51:16.563883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:16.563901 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:16.563920 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.563937 | orchestrator | 2025-08-29 20:51:16.563956 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 20:51:16.563974 | orchestrator | Friday 29 August 2025 20:51:15 +0000 (0:00:00.135) 0:00:56.480 ********* 2025-08-29 20:51:16.563994 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.564011 | orchestrator | 2025-08-29 20:51:16.564029 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 20:51:16.564047 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.124) 0:00:56.604 ********* 2025-08-29 20:51:16.564067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:16.564086 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:16.564102 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:16.564119 | orchestrator | 2025-08-29 20:51:16.564136 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 20:51:16.564152 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.139) 0:00:56.744 ********* 2025-08-29 20:51:16.564170 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:16.564186 | orchestrator | 2025-08-29 20:51:16.564202 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 20:51:16.564212 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.114) 0:00:56.858 ********* 2025-08-29 20:51:16.564234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:22.272880 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:22.272993 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273010 | orchestrator | 2025-08-29 20:51:22.273023 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 20:51:22.273035 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.282) 0:00:57.141 ********* 2025-08-29 20:51:22.273047 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:22.273058 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:22.273069 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273080 | orchestrator | 2025-08-29 20:51:22.273091 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 20:51:22.273103 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.138) 0:00:57.280 ********* 2025-08-29 20:51:22.273114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:22.273125 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:22.273136 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273147 | orchestrator | 2025-08-29 20:51:22.273181 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 20:51:22.273192 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.134) 0:00:57.415 ********* 2025-08-29 20:51:22.273203 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273214 | orchestrator | 2025-08-29 20:51:22.273225 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 20:51:22.273235 | orchestrator | Friday 29 August 2025 20:51:16 +0000 (0:00:00.122) 0:00:57.537 ********* 2025-08-29 20:51:22.273246 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273257 | orchestrator | 2025-08-29 20:51:22.273268 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 20:51:22.273278 | orchestrator | Friday 29 August 2025 20:51:17 +0000 (0:00:00.119) 0:00:57.656 ********* 2025-08-29 20:51:22.273289 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273300 | orchestrator | 2025-08-29 20:51:22.273311 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 20:51:22.273322 | orchestrator | Friday 29 August 2025 20:51:17 +0000 (0:00:00.122) 0:00:57.778 ********* 2025-08-29 20:51:22.273333 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:51:22.273345 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 20:51:22.273358 | orchestrator | } 2025-08-29 20:51:22.273371 | orchestrator | 2025-08-29 20:51:22.273383 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 20:51:22.273395 | orchestrator | Friday 29 August 2025 20:51:17 +0000 
(0:00:00.129) 0:00:57.907 ********* 2025-08-29 20:51:22.273407 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:51:22.273420 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 20:51:22.273432 | orchestrator | } 2025-08-29 20:51:22.273444 | orchestrator | 2025-08-29 20:51:22.273456 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 20:51:22.273469 | orchestrator | Friday 29 August 2025 20:51:17 +0000 (0:00:00.121) 0:00:58.029 ********* 2025-08-29 20:51:22.273482 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:51:22.273495 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 20:51:22.273508 | orchestrator | } 2025-08-29 20:51:22.273520 | orchestrator | 2025-08-29 20:51:22.273532 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 20:51:22.273544 | orchestrator | Friday 29 August 2025 20:51:17 +0000 (0:00:00.133) 0:00:58.162 ********* 2025-08-29 20:51:22.273556 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:22.273568 | orchestrator | 2025-08-29 20:51:22.273580 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 20:51:22.273592 | orchestrator | Friday 29 August 2025 20:51:18 +0000 (0:00:00.501) 0:00:58.664 ********* 2025-08-29 20:51:22.273604 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:22.273616 | orchestrator | 2025-08-29 20:51:22.273628 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 20:51:22.273640 | orchestrator | Friday 29 August 2025 20:51:18 +0000 (0:00:00.482) 0:00:59.147 ********* 2025-08-29 20:51:22.273653 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:22.273665 | orchestrator | 2025-08-29 20:51:22.273677 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 20:51:22.273690 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.519) 0:00:59.666 ********* 2025-08-29 20:51:22.273702 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:22.273713 | orchestrator | 2025-08-29 20:51:22.273724 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 20:51:22.273735 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.297) 0:00:59.964 ********* 2025-08-29 20:51:22.273746 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273757 | orchestrator | 2025-08-29 20:51:22.273768 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 20:51:22.273779 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.112) 0:01:00.076 ********* 2025-08-29 20:51:22.273789 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273807 | orchestrator | 2025-08-29 20:51:22.273839 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 20:51:22.273867 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.108) 0:01:00.184 ********* 2025-08-29 20:51:22.273879 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:51:22.273890 | orchestrator |  "vgs_report": { 2025-08-29 20:51:22.273903 | orchestrator |  "vg": [] 2025-08-29 20:51:22.273931 | orchestrator |  } 2025-08-29 20:51:22.273944 | orchestrator | } 2025-08-29 20:51:22.273955 | orchestrator | 2025-08-29 20:51:22.273965 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-08-29 20:51:22.273976 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.141) 0:01:00.326 ********* 2025-08-29 20:51:22.273987 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.273998 | orchestrator | 2025-08-29 20:51:22.274009 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 20:51:22.274076 | orchestrator | Friday 29 August 2025 20:51:19 +0000 (0:00:00.180) 0:01:00.507 ********* 2025-08-29 20:51:22.274087 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274098 | orchestrator | 2025-08-29 20:51:22.274109 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 20:51:22.274120 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.139) 0:01:00.646 ********* 2025-08-29 20:51:22.274131 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274141 | orchestrator | 2025-08-29 20:51:22.274152 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 20:51:22.274163 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.136) 0:01:00.783 ********* 2025-08-29 20:51:22.274174 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274185 | orchestrator | 2025-08-29 20:51:22.274195 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 20:51:22.274206 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.138) 0:01:00.921 ********* 2025-08-29 20:51:22.274217 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274228 | orchestrator | 2025-08-29 20:51:22.274238 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 20:51:22.274249 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.119) 0:01:01.041 ********* 2025-08-29 20:51:22.274260 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274270 | orchestrator | 2025-08-29 20:51:22.274281 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 20:51:22.274292 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.128) 0:01:01.169 ********* 2025-08-29 20:51:22.274303 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274313 | orchestrator | 2025-08-29 20:51:22.274324 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 20:51:22.274335 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.121) 0:01:01.291 ********* 2025-08-29 20:51:22.274345 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274356 | orchestrator | 2025-08-29 20:51:22.274367 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 20:51:22.274378 | orchestrator | Friday 29 August 2025 20:51:20 +0000 (0:00:00.138) 0:01:01.429 ********* 2025-08-29 20:51:22.274388 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274399 | orchestrator | 2025-08-29 20:51:22.274410 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 20:51:22.274420 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.304) 0:01:01.733 ********* 2025-08-29 20:51:22.274437 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274448 | orchestrator | 2025-08-29 20:51:22.274459 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 20:51:22.274469 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.139) 0:01:01.873 ********* 2025-08-29 20:51:22.274480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274491 | orchestrator | 2025-08-29 20:51:22.274501 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 20:51:22.274520 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.128) 0:01:02.002 ********* 2025-08-29 20:51:22.274531 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274542 | orchestrator | 2025-08-29 20:51:22.274553 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 20:51:22.274564 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.134) 0:01:02.137 ********* 2025-08-29 20:51:22.274574 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274585 | orchestrator | 2025-08-29 20:51:22.274596 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 20:51:22.274606 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.127) 0:01:02.264 ********* 2025-08-29 20:51:22.274617 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274628 | orchestrator | 2025-08-29 20:51:22.274638 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 20:51:22.274649 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.132) 0:01:02.397 ********* 2025-08-29 20:51:22.274660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:22.274671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:22.274682 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274693 | orchestrator | 2025-08-29 20:51:22.274703 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 20:51:22.274714 | orchestrator | Friday 29 August 2025 20:51:21 +0000 (0:00:00.151) 0:01:02.549 ********* 2025-08-29 20:51:22.274725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:22.274736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:22.274747 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:22.274757 | orchestrator | 2025-08-29 20:51:22.274768 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 20:51:22.274779 | orchestrator | Friday 29 August 2025 20:51:22 +0000 (0:00:00.145) 0:01:02.694 ********* 2025-08-29 20:51:22.274797 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.163274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163365 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163381 | orchestrator | 2025-08-29 20:51:25.163393 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 20:51:25.163405 | orchestrator | Friday 29 August 2025 20:51:22 +0000 (0:00:00.157) 0:01:02.851 ********* 2025-08-29 20:51:25.163416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.163427 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163438 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163448 | orchestrator | 2025-08-29 20:51:25.163459 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 20:51:25.163470 | orchestrator | Friday 29 August 2025 20:51:22 +0000 (0:00:00.170) 0:01:03.022 ********* 2025-08-29 20:51:25.163481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.163517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163529 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163540 | orchestrator | 2025-08-29 20:51:25.163550 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 20:51:25.163561 | orchestrator | Friday 29 August 2025 20:51:22 +0000 (0:00:00.167) 0:01:03.190 ********* 2025-08-29 20:51:25.163572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.163583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163594 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163605 | orchestrator | 2025-08-29 20:51:25.163615 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 20:51:25.163638 | orchestrator | Friday 29 August 2025 20:51:22 +0000 (0:00:00.145) 0:01:03.335 ********* 2025-08-29 20:51:25.163649 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.163660 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163671 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163682 | orchestrator | 2025-08-29 20:51:25.163693 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 20:51:25.163703 | orchestrator | Friday 29 August 2025 20:51:23 +0000 (0:00:00.325) 0:01:03.660 ********* 2025-08-29 20:51:25.163714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 
20:51:25.163725 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.163736 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.163747 | orchestrator | 2025-08-29 20:51:25.163758 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 20:51:25.163768 | orchestrator | Friday 29 August 2025 20:51:23 +0000 (0:00:00.167) 0:01:03.827 ********* 2025-08-29 20:51:25.163779 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:25.163790 | orchestrator | 2025-08-29 20:51:25.163801 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 20:51:25.163811 | orchestrator | Friday 29 August 2025 20:51:23 +0000 (0:00:00.512) 0:01:04.339 ********* 2025-08-29 20:51:25.163871 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:25.163884 | orchestrator | 2025-08-29 20:51:25.163897 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 20:51:25.163909 | orchestrator | Friday 29 August 2025 20:51:24 +0000 (0:00:00.532) 0:01:04.872 ********* 2025-08-29 20:51:25.163921 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:25.163933 | orchestrator | 2025-08-29 20:51:25.163944 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 20:51:25.163956 | orchestrator | Friday 29 August 2025 20:51:24 +0000 (0:00:00.153) 0:01:05.025 ********* 2025-08-29 20:51:25.163969 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'vg_name': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'}) 2025-08-29 20:51:25.163982 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'vg_name': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'}) 2025-08-29 20:51:25.163993 | orchestrator | 2025-08-29 20:51:25.164006 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 20:51:25.164027 | orchestrator | Friday 29 August 2025 20:51:24 +0000 (0:00:00.148) 0:01:05.174 ********* 2025-08-29 20:51:25.164056 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.164069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.164081 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.164093 | orchestrator | 2025-08-29 20:51:25.164105 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 20:51:25.164117 | orchestrator | Friday 29 August 2025 20:51:24 +0000 (0:00:00.155) 0:01:05.329 ********* 2025-08-29 20:51:25.164129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.164142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.164153 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.164166 | orchestrator | 2025-08-29 20:51:25.164179 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 20:51:25.164191 | orchestrator | Friday 29 August 2025 20:51:24 +0000 (0:00:00.145) 0:01:05.475 ********* 2025-08-29 20:51:25.164203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'})  2025-08-29 20:51:25.164214 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'})  2025-08-29 20:51:25.164225 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:25.164235 | orchestrator | 2025-08-29 20:51:25.164246 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 20:51:25.164257 | orchestrator | Friday 29 August 2025 20:51:25 +0000 (0:00:00.129) 0:01:05.605 ********* 2025-08-29 20:51:25.164268 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 20:51:25.164279 | orchestrator |  "lvm_report": { 2025-08-29 20:51:25.164290 | orchestrator |  "lv": [ 2025-08-29 20:51:25.164301 | orchestrator |  { 2025-08-29 20:51:25.164313 | orchestrator |  "lv_name": "osd-block-275f26f1-4e1c-5372-9190-a1521a972d04", 2025-08-29 20:51:25.164324 | orchestrator |  "vg_name": "ceph-275f26f1-4e1c-5372-9190-a1521a972d04" 2025-08-29 20:51:25.164335 | orchestrator |  }, 2025-08-29 20:51:25.164350 | orchestrator |  { 2025-08-29 20:51:25.164361 | orchestrator |  "lv_name": "osd-block-c5db720f-fb16-50b5-adff-95cbe6288183", 2025-08-29 20:51:25.164372 | orchestrator |  "vg_name": "ceph-c5db720f-fb16-50b5-adff-95cbe6288183" 2025-08-29 20:51:25.164383 | orchestrator |  } 2025-08-29 20:51:25.164394 | orchestrator |  ], 2025-08-29 20:51:25.164404 | orchestrator |  "pv": [ 2025-08-29 20:51:25.164415 | orchestrator |  { 2025-08-29 20:51:25.164426 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 20:51:25.164437 | orchestrator |  "vg_name": "ceph-275f26f1-4e1c-5372-9190-a1521a972d04" 2025-08-29 20:51:25.164448 | orchestrator |  }, 2025-08-29 20:51:25.164458 | orchestrator |  { 2025-08-29 20:51:25.164469 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 20:51:25.164480 | orchestrator |  "vg_name": "ceph-c5db720f-fb16-50b5-adff-95cbe6288183" 2025-08-29 20:51:25.164491 | orchestrator |  } 2025-08-29 20:51:25.164502 | orchestrator |  ] 2025-08-29 20:51:25.164513 | orchestrator |  } 2025-08-29 20:51:25.164524 | orchestrator | } 2025-08-29 20:51:25.164535 | orchestrator | 2025-08-29 20:51:25.164546 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:51:25.164557 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 20:51:25.164574 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 20:51:25.164585 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-08-29 20:51:25.164596 | orchestrator | 2025-08-29 20:51:25.164606 | orchestrator | 2025-08-29 20:51:25.164617 | orchestrator | 2025-08-29 20:51:25.164628 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:51:25.164639 | orchestrator | Friday 29 August 2025 20:51:25 +0000 (0:00:00.119) 0:01:05.724 ********* 2025-08-29 20:51:25.164649 | orchestrator | 
=============================================================================== 2025-08-29 20:51:25.164660 | orchestrator | Create block VGs -------------------------------------------------------- 5.76s 2025-08-29 20:51:25.164671 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s 2025-08-29 20:51:25.164682 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.72s 2025-08-29 20:51:25.164692 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-08-29 20:51:25.164703 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2025-08-29 20:51:25.164713 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s 2025-08-29 20:51:25.164724 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.46s 2025-08-29 20:51:25.164735 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s 2025-08-29 20:51:25.164752 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2025-08-29 20:51:25.395851 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-08-29 20:51:25.395928 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s 2025-08-29 20:51:25.395940 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.75s 2025-08-29 20:51:25.395952 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-08-29 20:51:25.395962 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2025-08-29 20:51:25.395973 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.63s 2025-08-29 20:51:25.395984 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-08-29 20:51:25.395994 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2025-08-29 20:51:25.396005 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.60s 2025-08-29 20:51:25.396016 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-08-29 20:51:25.396027 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.58s 2025-08-29 20:51:37.415348 | orchestrator | 2025-08-29 20:51:37 | INFO  | Task 1afb1178-9ab8-4028-b1a9-6fbd49310511 (facts) was prepared for execution. 2025-08-29 20:51:37.415465 | orchestrator | 2025-08-29 20:51:37 | INFO  | It takes a moment until task 1afb1178-9ab8-4028-b1a9-6fbd49310511 (facts) has been started and output is visible here. 
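The Ceph LVM play above boils down to one volume group (ceph-<uuid>) and one block logical volume (osd-block-<uuid>) per entry of ceph_osd_devices (sdb and sdc on testbed-node-5), with all DB/WAL variants skipped because no ceph_db_devices/ceph_wal_devices are defined. As a rough illustration only, not the actual OSISM role code, the same outcome could be sketched with community.general modules, assuming a ceph_osd_devices dict shaped like the items printed in the log:

- name: Create block VGs (sketch, assumes ceph_osd_devices as printed above)
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"    # e.g. ceph-275f26f1-4e1c-...
    pvs: "/dev/{{ item.key }}"                  # e.g. /dev/sdb
  loop: "{{ ceph_osd_devices | dict2items }}"

- name: Create block LVs (sketch)
  community.general.lvol:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
    size: 100%FREE                              # whole VG becomes the OSD block LV
  loop: "{{ ceph_osd_devices | dict2items }}"

The resulting VG/LV and PV/VG pairs are exactly what the "Print LVM report data" task reports at the end of the play.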
2025-08-29 20:51:48.540326 | orchestrator | 2025-08-29 20:51:48.540470 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 20:51:48.540489 | orchestrator | 2025-08-29 20:51:48.540502 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 20:51:48.540515 | orchestrator | Friday 29 August 2025 20:51:40 +0000 (0:00:00.197) 0:00:00.197 ********* 2025-08-29 20:51:48.540580 | orchestrator | ok: [testbed-manager] 2025-08-29 20:51:48.540595 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:51:48.540606 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:51:48.540641 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:51:48.540652 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:51:48.540663 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:51:48.540674 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:48.540685 | orchestrator | 2025-08-29 20:51:48.540696 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 20:51:48.540707 | orchestrator | Friday 29 August 2025 20:51:41 +0000 (0:00:00.913) 0:00:01.111 ********* 2025-08-29 20:51:48.540723 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:51:48.540743 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:51:48.540761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:51:48.540779 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:51:48.540797 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:51:48.540815 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:48.540881 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:51:48.540899 | orchestrator | 2025-08-29 20:51:48.540912 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 20:51:48.540926 | orchestrator | 2025-08-29 20:51:48.540945 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 20:51:48.540964 | orchestrator | Friday 29 August 2025 20:51:42 +0000 (0:00:01.046) 0:00:02.158 ********* 2025-08-29 20:51:48.540982 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:51:48.541001 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:51:48.541020 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:51:48.541038 | orchestrator | ok: [testbed-manager] 2025-08-29 20:51:48.541057 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:51:48.541077 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:51:48.541097 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:51:48.541116 | orchestrator | 2025-08-29 20:51:48.541135 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 20:51:48.541155 | orchestrator | 2025-08-29 20:51:48.541174 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 20:51:48.541194 | orchestrator | Friday 29 August 2025 20:51:47 +0000 (0:00:04.935) 0:00:07.093 ********* 2025-08-29 20:51:48.541209 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:51:48.541222 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:51:48.541235 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:51:48.541247 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:51:48.541258 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:51:48.541268 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:51:48.541278 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 20:51:48.541289 | orchestrator | 2025-08-29 20:51:48.541299 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:51:48.541311 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541323 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541333 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541344 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541354 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541365 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541375 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 20:51:48.541386 | orchestrator | 2025-08-29 20:51:48.541396 | orchestrator | 2025-08-29 20:51:48.541418 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:51:48.541429 | orchestrator | Friday 29 August 2025 20:51:48 +0000 (0:00:00.489) 0:00:07.583 ********* 2025-08-29 20:51:48.541440 | orchestrator | =============================================================================== 2025-08-29 20:51:48.541450 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s 2025-08-29 20:51:48.541461 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2025-08-29 20:51:48.541471 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.91s 2025-08-29 20:51:48.541482 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-08-29 20:52:00.687446 | orchestrator | 2025-08-29 20:52:00 | INFO  | Task f6257f4c-3a58-4dba-b3a3-393b0c825914 (frr) was prepared for execution. 2025-08-29 20:52:00.687537 | orchestrator | 2025-08-29 20:52:00 | INFO  | It takes a moment until task f6257f4c-3a58-4dba-b3a3-393b0c825914 (frr) has been started and output is visible here. 
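The facts run above only ensures the custom facts directory exists, skips copying fact files, and re-gathers facts on all hosts. A minimal sketch of those two effective steps with builtin modules (the facts.d path is an assumption, it is not shown in the log):

- name: Create custom facts directory (sketch; /etc/ansible/facts.d is assumed)
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Gather facts about hosts (sketch)
  ansible.builtin.setup:

The frr task queued at the end of this block is the next play in the log; its output follows below.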
2025-08-29 20:52:25.417330 | orchestrator | 2025-08-29 20:52:25.417437 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-08-29 20:52:25.417452 | orchestrator | 2025-08-29 20:52:25.417463 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-08-29 20:52:25.417491 | orchestrator | Friday 29 August 2025 20:52:04 +0000 (0:00:00.242) 0:00:00.242 ********* 2025-08-29 20:52:25.417503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:52:25.417514 | orchestrator | 2025-08-29 20:52:25.417524 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-08-29 20:52:25.417534 | orchestrator | Friday 29 August 2025 20:52:04 +0000 (0:00:00.230) 0:00:00.473 ********* 2025-08-29 20:52:25.417544 | orchestrator | changed: [testbed-manager] 2025-08-29 20:52:25.417554 | orchestrator | 2025-08-29 20:52:25.417564 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-08-29 20:52:25.417574 | orchestrator | Friday 29 August 2025 20:52:05 +0000 (0:00:01.122) 0:00:01.595 ********* 2025-08-29 20:52:25.417584 | orchestrator | changed: [testbed-manager] 2025-08-29 20:52:25.417593 | orchestrator | 2025-08-29 20:52:25.417603 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-08-29 20:52:25.417619 | orchestrator | Friday 29 August 2025 20:52:15 +0000 (0:00:09.360) 0:00:10.956 ********* 2025-08-29 20:52:25.417629 | orchestrator | ok: [testbed-manager] 2025-08-29 20:52:25.417639 | orchestrator | 2025-08-29 20:52:25.417649 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-08-29 20:52:25.417658 | orchestrator | Friday 29 August 2025 20:52:16 +0000 (0:00:01.230) 0:00:12.186 ********* 2025-08-29 20:52:25.417668 | orchestrator | changed: [testbed-manager] 2025-08-29 20:52:25.417678 | orchestrator | 2025-08-29 20:52:25.417687 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-08-29 20:52:25.417697 | orchestrator | Friday 29 August 2025 20:52:17 +0000 (0:00:00.902) 0:00:13.089 ********* 2025-08-29 20:52:25.417706 | orchestrator | ok: [testbed-manager] 2025-08-29 20:52:25.417716 | orchestrator | 2025-08-29 20:52:25.417726 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-08-29 20:52:25.417736 | orchestrator | Friday 29 August 2025 20:52:18 +0000 (0:00:01.110) 0:00:14.200 ********* 2025-08-29 20:52:25.417745 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:52:25.417755 | orchestrator | 2025-08-29 20:52:25.417765 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-08-29 20:52:25.417774 | orchestrator | Friday 29 August 2025 20:52:19 +0000 (0:00:00.790) 0:00:14.990 ********* 2025-08-29 20:52:25.417784 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:52:25.417793 | orchestrator | 2025-08-29 20:52:25.417803 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-08-29 20:52:25.417813 | orchestrator | Friday 29 August 2025 20:52:19 +0000 (0:00:00.150) 0:00:15.141 ********* 2025-08-29 20:52:25.417870 | orchestrator | changed: [testbed-manager] 2025-08-29 20:52:25.417882 | orchestrator 
| 2025-08-29 20:52:25.417893 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-08-29 20:52:25.417904 | orchestrator | Friday 29 August 2025 20:52:20 +0000 (0:00:00.901) 0:00:16.042 ********* 2025-08-29 20:52:25.417915 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-08-29 20:52:25.417926 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-08-29 20:52:25.417938 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-08-29 20:52:25.417949 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-08-29 20:52:25.417959 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-08-29 20:52:25.417970 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-08-29 20:52:25.417981 | orchestrator | 2025-08-29 20:52:25.417992 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-08-29 20:52:25.418003 | orchestrator | Friday 29 August 2025 20:52:22 +0000 (0:00:02.119) 0:00:18.162 ********* 2025-08-29 20:52:25.418013 | orchestrator | ok: [testbed-manager] 2025-08-29 20:52:25.418084 | orchestrator | 2025-08-29 20:52:25.418096 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-08-29 20:52:25.418107 | orchestrator | Friday 29 August 2025 20:52:23 +0000 (0:00:01.271) 0:00:19.433 ********* 2025-08-29 20:52:25.418117 | orchestrator | changed: [testbed-manager] 2025-08-29 20:52:25.418127 | orchestrator | 2025-08-29 20:52:25.418138 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:52:25.418149 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 20:52:25.418161 | orchestrator | 2025-08-29 20:52:25.418171 | orchestrator | 2025-08-29 20:52:25.418182 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:52:25.418193 | orchestrator | Friday 29 August 2025 20:52:25 +0000 (0:00:01.409) 0:00:20.843 ********* 2025-08-29 20:52:25.418204 | orchestrator | =============================================================================== 2025-08-29 20:52:25.418215 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.36s 2025-08-29 20:52:25.418226 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.12s 2025-08-29 20:52:25.418235 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.41s 2025-08-29 20:52:25.418245 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.27s 2025-08-29 20:52:25.418271 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.23s 2025-08-29 20:52:25.418281 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.12s 2025-08-29 20:52:25.418291 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2025-08-29 20:52:25.418300 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.90s 2025-08-29 
20:52:25.418310 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.90s 2025-08-29 20:52:25.418319 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.79s 2025-08-29 20:52:25.418329 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2025-08-29 20:52:25.418339 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s 2025-08-29 20:52:25.651998 | orchestrator | 2025-08-29 20:52:25.655534 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 20:52:25 UTC 2025 2025-08-29 20:52:25.655565 | orchestrator | 2025-08-29 20:52:27.407798 | orchestrator | 2025-08-29 20:52:27 | INFO  | Collection nutshell is prepared for execution 2025-08-29 20:52:27.407968 | orchestrator | 2025-08-29 20:52:27 | INFO  | D [0] - dotfiles 2025-08-29 20:52:37.436641 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [0] - homer 2025-08-29 20:52:37.436754 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [0] - netdata 2025-08-29 20:52:37.436771 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [0] - openstackclient 2025-08-29 20:52:37.436783 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [0] - phpmyadmin 2025-08-29 20:52:37.436794 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [0] - common 2025-08-29 20:52:37.442391 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [1] -- loadbalancer 2025-08-29 20:52:37.442652 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [2] --- opensearch 2025-08-29 20:52:37.443048 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [2] --- mariadb-ng 2025-08-29 20:52:37.444204 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [3] ---- horizon 2025-08-29 20:52:37.444225 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [3] ---- keystone 2025-08-29 20:52:37.444643 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [4] ----- neutron 2025-08-29 20:52:37.445144 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ wait-for-nova 2025-08-29 20:52:37.445562 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [5] ------ octavia 2025-08-29 20:52:37.447861 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- barbican 2025-08-29 20:52:37.448029 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- designate 2025-08-29 20:52:37.448543 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- ironic 2025-08-29 20:52:37.448901 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- placement 2025-08-29 20:52:37.449171 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- magnum 2025-08-29 20:52:37.450320 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [1] -- openvswitch 2025-08-29 20:52:37.450709 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [2] --- ovn 2025-08-29 20:52:37.451007 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [1] -- memcached 2025-08-29 20:52:37.451343 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [1] -- redis 2025-08-29 20:52:37.451628 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 20:52:37.452031 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [0] - kubernetes 2025-08-29 20:52:37.454466 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [1] -- kubeconfig 2025-08-29 20:52:37.454825 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 20:52:37.454964 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [0] - ceph 2025-08-29 20:52:37.457110 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [1] -- ceph-pools 2025-08-29 
20:52:37.457135 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 20:52:37.457147 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [3] ---- cephclient 2025-08-29 20:52:37.457429 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 20:52:37.457451 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 20:52:37.457702 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 20:52:37.457721 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ glance 2025-08-29 20:52:37.457733 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ cinder 2025-08-29 20:52:37.457945 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ nova 2025-08-29 20:52:37.458183 | orchestrator | 2025-08-29 20:52:37 | INFO  | A [4] ----- prometheus 2025-08-29 20:52:37.458238 | orchestrator | 2025-08-29 20:52:37 | INFO  | D [5] ------ grafana 2025-08-29 20:52:37.648744 | orchestrator | 2025-08-29 20:52:37 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 20:52:37.648871 | orchestrator | 2025-08-29 20:52:37 | INFO  | Tasks are running in the background 2025-08-29 20:52:40.344562 | orchestrator | 2025-08-29 20:52:40 | INFO  | No task IDs specified, wait for all currently running tasks 2025-08-29 20:52:42.442754 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:42.444710 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:42.445183 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:42.451811 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:42.452395 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:42.453101 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:42.453680 | orchestrator | 2025-08-29 20:52:42 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:42.453891 | orchestrator | 2025-08-29 20:52:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:52:45.493154 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:45.493663 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:45.497584 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:45.499821 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:45.500861 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:45.506852 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:45.513078 | orchestrator | 2025-08-29 20:52:45 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:45.513116 | orchestrator | 2025-08-29 20:52:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:52:48.540227 | orchestrator | 2025-08-29 20:52:48 | INFO  
| Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:48.541777 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:48.542454 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:48.543644 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:48.544230 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:48.546115 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:48.546699 | orchestrator | 2025-08-29 20:52:48 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:48.546804 | orchestrator | 2025-08-29 20:52:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:52:52.103816 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:52.103945 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:52.103959 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:52.103970 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:52.103981 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:52.103992 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:52.104003 | orchestrator | 2025-08-29 20:52:52 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:52.104014 | orchestrator | 2025-08-29 20:52:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:52:55.098420 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:55.098884 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:55.099449 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:55.099860 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:55.100610 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:55.101176 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:55.102177 | orchestrator | 2025-08-29 20:52:55 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:55.102205 | orchestrator | 2025-08-29 20:52:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:52:58.150006 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:52:58.151573 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:52:58.152787 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:52:58.154673 
| orchestrator | 2025-08-29 20:52:58 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:52:58.155266 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:52:58.156792 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:52:58.158743 | orchestrator | 2025-08-29 20:52:58 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state STARTED 2025-08-29 20:52:58.158767 | orchestrator | 2025-08-29 20:52:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:53:01.223712 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:53:01.223801 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task dbf1ec19-c700-435a-bbb1-876412222811 is in state STARTED 2025-08-29 20:53:01.223815 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:53:01.226345 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:53:01.226894 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state STARTED 2025-08-29 20:53:01.227493 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:53:01.229033 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state STARTED 2025-08-29 20:53:01.230073 | orchestrator | 2025-08-29 20:53:01 | INFO  | Task 1a2212c3-975c-4280-a8ec-15eec013c3d2 is in state SUCCESS 2025-08-29 20:53:01.230437 | orchestrator | 2025-08-29 20:53:01.230460 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-08-29 20:53:01.230472 | orchestrator | 2025-08-29 20:53:01.230483 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-08-29 20:53:01.230494 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.803) 0:00:00.803 ********* 2025-08-29 20:53:01.230505 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:01.230516 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:01.230527 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:01.230538 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:01.230548 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:01.230559 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:53:01.230570 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:01.230580 | orchestrator | 2025-08-29 20:53:01.230591 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-08-29 20:53:01.230602 | orchestrator | Friday 29 August 2025 20:52:52 +0000 (0:00:03.971) 0:00:04.774 ********* 2025-08-29 20:53:01.230613 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 20:53:01.230624 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 20:53:01.230634 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 20:53:01.230645 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 20:53:01.230655 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 20:53:01.230666 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 20:53:01.230676 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 20:53:01.230687 | orchestrator | 2025-08-29 20:53:01.230698 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-08-29 20:53:01.230709 | orchestrator | Friday 29 August 2025 20:52:54 +0000 (0:00:01.309) 0:00:06.084 ********* 2025-08-29 20:53:01.230723 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 20:52:53.383433', 'end': '2025-08-29 20:52:53.392393', 'delta': '0:00:00.008960', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 20:53:01.230745 | orchestrator | ok: [testbed-node-1], ok: [testbed-node-2], ok: [testbed-node-3], ok: [testbed-node-4], ok: [testbed-node-5] and ok: [testbed-manager] reported the same loop result for item .tmux.conf: ls -F ~/.tmux.conf returned rc 2 ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory") with failed_when_result False. 2025-08-29 20:53:01.231147 | orchestrator | 2025-08-29
20:53:01.231160 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-08-29 20:53:01.231174 | orchestrator | Friday 29 August 2025 20:52:55 +0000 (0:00:01.279) 0:00:07.363 ********* 2025-08-29 20:53:01.231186 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 20:53:01.231199 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 20:53:01.231212 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 20:53:01.231224 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 20:53:01.231236 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 20:53:01.231249 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 20:53:01.231260 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 20:53:01.231271 | orchestrator | 2025-08-29 20:53:01.231282 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-08-29 20:53:01.231293 | orchestrator | Friday 29 August 2025 20:52:57 +0000 (0:00:01.651) 0:00:09.014 ********* 2025-08-29 20:53:01.231304 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 20:53:01.231315 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-08-29 20:53:01.231326 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 20:53:01.231341 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 20:53:01.231352 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 20:53:01.231363 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 20:53:01.231374 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 20:53:01.231385 | orchestrator | 2025-08-29 20:53:01.231396 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:53:01.231416 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231429 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231440 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231451 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231462 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231473 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231484 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:01.231495 | orchestrator | 2025-08-29 20:53:01.231506 | orchestrator | 2025-08-29 20:53:01.231517 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:53:01.231528 | orchestrator | Friday 29 August 2025 20:52:59 +0000 (0:00:02.246) 0:00:11.261 ********* 2025-08-29 20:53:01.231538 | orchestrator | =============================================================================== 2025-08-29 20:53:01.231549 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.97s 2025-08-29 20:53:01.231560 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. 
------------------ 2.25s 2025-08-29 20:53:01.231571 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.65s 2025-08-29 20:53:01.231589 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.31s 2025-08-29 20:53:01.231600 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.28s 2025-08-29 20:53:01.231611 | orchestrator | 2025-08-29 20:53:01 | INFO  | Wait 1 second(s) until the next check
On the checks between 20:53:04 and 20:53:22 the same seven tasks (e7523fd1-77de-4761-94da-632c432dbf94, dbf1ec19-c700-435a-bbb1-876412222811, d35709f1-abd8-4ffc-8773-1a56b1013ec2, d17d85b9-add6-457b-978d-dd39222789b5, 985190f5-795e-48f4-82fd-af62e67271b4, 4aa3a55a-efc9-4f18-97a5-8550d96907ff, 3669abc5-884b-401d-8d27-8f641a8ba6ab) were reported in state STARTED roughly every three seconds, each check ending with "Wait 1 second(s) until the next check".
2025-08-29 20:53:25.805745 | orchestrator | 2025-08-29 20:53:25 | INFO  | Task 985190f5-795e-48f4-82fd-af62e67271b4 is in state SUCCESS; the other six tasks remained in state STARTED on the checks at 20:53:25, 20:53:28 and 20:53:31.
2025-08-29 20:53:34.966534 | orchestrator | 2025-08-29 20:53:34 | INFO  | Task 3669abc5-884b-401d-8d27-8f641a8ba6ab is in state SUCCESS; the remaining five tasks were still in state STARTED on the checks at 20:53:34, 20:53:38, 20:53:41 and 20:53:44, each followed by "Wait 1 second(s) until the next check".
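The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records come from the deployment wrapper polling the OSISM task queue until every submitted task reaches a final state. A minimal sketch of that polling pattern follows; it is not the actual osism client, and fetch_task_state() is a hypothetical stand-in for the real task-state lookup:

import time

def fetch_task_state(task_id, attempts):
    # Hypothetical stand-in for the real lookup against the task queue;
    # here every task simply reports SUCCESS after three checks.
    attempts[task_id] = attempts.get(task_id, 0) + 1
    return "SUCCESS" if attempts[task_id] >= 3 else "STARTED"

def wait_for_tasks(task_ids, interval=1.0):
    pending = set(task_ids)
    attempts = {}
    while pending:
        for task_id in sorted(pending):
            state = fetch_task_state(task_id, attempts)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                # Drop finished tasks so they are no longer polled.
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks([
        "d35709f1-abd8-4ffc-8773-1a56b1013ec2",
        "d17d85b9-add6-457b-978d-dd39222789b5",
        "4aa3a55a-efc9-4f18-97a5-8550d96907ff",
    ])

In this job the same kind of loop keeps printing STARTED while the long-running plays (which appear to correspond to the tracked task IDs) are still executing, and drops a task from the list once it reports SUCCESS.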
2025-08-29 20:53:47.218226 | orchestrator | 2025-08-29 20:53:47 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:53:47.219385 | orchestrator | 2025-08-29 20:53:47 | INFO  | Task dbf1ec19-c700-435a-bbb1-876412222811 is in state STARTED 2025-08-29 20:53:47.220513 | orchestrator | 2025-08-29 20:53:47 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:53:47.221354 | orchestrator | 2025-08-29 20:53:47 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:53:47.222672 | orchestrator | 2025-08-29 20:53:47 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:53:47.222722 | orchestrator | 2025-08-29 20:53:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:53:50.305249 | orchestrator | 2025-08-29 20:53:50 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:53:50.306144 | orchestrator | 2025-08-29 20:53:50 | INFO  | Task dbf1ec19-c700-435a-bbb1-876412222811 is in state STARTED 2025-08-29 20:53:50.309189 | orchestrator | 2025-08-29 20:53:50 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:53:50.310306 | orchestrator | 2025-08-29 20:53:50 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:53:50.311555 | orchestrator | 2025-08-29 20:53:50 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:53:50.311584 | orchestrator | 2025-08-29 20:53:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:53:53.343767 | orchestrator | 2025-08-29 20:53:53 | INFO  | Task e7523fd1-77de-4761-94da-632c432dbf94 is in state STARTED 2025-08-29 20:53:53.345410 | orchestrator | 2025-08-29 20:53:53 | INFO  | Task dbf1ec19-c700-435a-bbb1-876412222811 is in state STARTED 2025-08-29 20:53:53.350311 | orchestrator | 2025-08-29 20:53:53 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:53:53.350347 | orchestrator | 2025-08-29 20:53:53 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:53:53.350971 | orchestrator | 2025-08-29 20:53:53 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:53:53.351187 | orchestrator | 2025-08-29 20:53:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:53:56.394763 | orchestrator | 2025-08-29 20:53:56.394875 | orchestrator | 2025-08-29 20:53:56.394891 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-08-29 20:53:56.394903 | orchestrator | 2025-08-29 20:53:56.394915 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-08-29 20:53:56.394934 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.523) 0:00:00.523 ********* 2025-08-29 20:53:56.394946 | orchestrator | ok: [testbed-manager] => { 2025-08-29 20:53:56.394958 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-08-29 20:53:56.394970 | orchestrator | } 2025-08-29 20:53:56.394982 | orchestrator | 2025-08-29 20:53:56.394993 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-08-29 20:53:56.395059 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.399) 0:00:00.923 ********* 2025-08-29 20:53:56.395099 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.395111 | orchestrator | 2025-08-29 20:53:56.395122 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-08-29 20:53:56.395133 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:01.265) 0:00:02.188 ********* 2025-08-29 20:53:56.395144 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-08-29 20:53:56.395155 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-08-29 20:53:56.395166 | orchestrator | 2025-08-29 20:53:56.395177 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-08-29 20:53:56.395188 | orchestrator | Friday 29 August 2025 20:52:51 +0000 (0:00:01.727) 0:00:03.915 ********* 2025-08-29 20:53:56.395199 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.395209 | orchestrator | 2025-08-29 20:53:56.395220 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-08-29 20:53:56.395231 | orchestrator | Friday 29 August 2025 20:52:54 +0000 (0:00:02.748) 0:00:06.664 ********* 2025-08-29 20:53:56.395242 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.395252 | orchestrator | 2025-08-29 20:53:56.395263 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-08-29 20:53:56.395274 | orchestrator | Friday 29 August 2025 20:52:56 +0000 (0:00:02.127) 0:00:08.791 ********* 2025-08-29 20:53:56.395285 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
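The "FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left)." record above, followed a little later by ok: [testbed-manager], is Ansible's retries/until behaviour: the task keeps re-checking until the service is up or the retry budget is exhausted. A rough sketch of the same retry-until-ok pattern, with a hypothetical is_service_up() probe standing in for whatever check the role actually performs:

import time

def is_service_up(attempt):
    # Hypothetical probe; pretend the service becomes reachable on the
    # third attempt. The real task would inspect the container instead.
    return attempt >= 3

def manage_service(name, retries=10, delay=5.0):
    for attempt in range(1, retries + 1):
        if is_service_up(attempt):
            print(f"ok: [{name}]")
            return
        print(f"FAILED - RETRYING: [{name}]: Manage service ({retries - attempt} retries left).")
        time.sleep(delay)
    raise RuntimeError(f"{name} did not come up after {retries} retries")

if __name__ == "__main__":
    manage_service("testbed-manager", retries=10, delay=0.1)

The 25.78s and 32.00s totals for "Manage homer service" and "Manage openstackclient service" in the recaps that follow are largely time spent in this wait loop while the containers start.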
2025-08-29 20:53:56.395313 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.395326 | orchestrator | 2025-08-29 20:53:56.395338 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-08-29 20:53:56.395350 | orchestrator | Friday 29 August 2025 20:53:22 +0000 (0:00:25.778) 0:00:34.570 ********* 2025-08-29 20:53:56.395363 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.395376 | orchestrator | 2025-08-29 20:53:56.395387 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:53:56.395400 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.395413 | orchestrator | 2025-08-29 20:53:56.395425 | orchestrator | 2025-08-29 20:53:56.395437 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:53:56.395449 | orchestrator | Friday 29 August 2025 20:53:25 +0000 (0:00:02.755) 0:00:37.326 ********* 2025-08-29 20:53:56.395461 | orchestrator | =============================================================================== 2025-08-29 20:53:56.395473 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.78s 2025-08-29 20:53:56.395485 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.76s 2025-08-29 20:53:56.395498 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.75s 2025-08-29 20:53:56.395510 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.13s 2025-08-29 20:53:56.395522 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.73s 2025-08-29 20:53:56.395534 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.27s 2025-08-29 20:53:56.395546 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.40s 2025-08-29 20:53:56.395558 | orchestrator | 2025-08-29 20:53:56.395570 | orchestrator | 2025-08-29 20:53:56.395581 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-08-29 20:53:56.395593 | orchestrator | 2025-08-29 20:53:56.395606 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-08-29 20:53:56.395619 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:00.474) 0:00:00.474 ********* 2025-08-29 20:53:56.395632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-08-29 20:53:56.395645 | orchestrator | 2025-08-29 20:53:56.395657 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-08-29 20:53:56.395669 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:00.342) 0:00:00.817 ********* 2025-08-29 20:53:56.395681 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-08-29 20:53:56.395692 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-08-29 20:53:56.395703 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-08-29 20:53:56.395714 | orchestrator | 2025-08-29 20:53:56.395725 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-08-29 
20:53:56.395736 | orchestrator | Friday 29 August 2025 20:52:52 +0000 (0:00:02.589) 0:00:03.406 ********* 2025-08-29 20:53:56.395746 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.395757 | orchestrator | 2025-08-29 20:53:56.395768 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-08-29 20:53:56.395779 | orchestrator | Friday 29 August 2025 20:52:55 +0000 (0:00:02.768) 0:00:06.175 ********* 2025-08-29 20:53:56.395806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-08-29 20:53:56.395856 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.395869 | orchestrator | 2025-08-29 20:53:56.395880 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-08-29 20:53:56.395891 | orchestrator | Friday 29 August 2025 20:53:27 +0000 (0:00:32.002) 0:00:38.178 ********* 2025-08-29 20:53:56.395901 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.395919 | orchestrator | 2025-08-29 20:53:56.395930 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-08-29 20:53:56.395941 | orchestrator | Friday 29 August 2025 20:53:28 +0000 (0:00:00.957) 0:00:39.136 ********* 2025-08-29 20:53:56.395952 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.395963 | orchestrator | 2025-08-29 20:53:56.395974 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-08-29 20:53:56.396013 | orchestrator | Friday 29 August 2025 20:53:29 +0000 (0:00:01.312) 0:00:40.448 ********* 2025-08-29 20:53:56.396026 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.396037 | orchestrator | 2025-08-29 20:53:56.396048 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-08-29 20:53:56.396059 | orchestrator | Friday 29 August 2025 20:53:31 +0000 (0:00:01.888) 0:00:42.337 ********* 2025-08-29 20:53:56.396070 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.396081 | orchestrator | 2025-08-29 20:53:56.396092 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-08-29 20:53:56.396103 | orchestrator | Friday 29 August 2025 20:53:32 +0000 (0:00:01.096) 0:00:43.433 ********* 2025-08-29 20:53:56.396113 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.396124 | orchestrator | 2025-08-29 20:53:56.396135 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-08-29 20:53:56.396146 | orchestrator | Friday 29 August 2025 20:53:32 +0000 (0:00:00.599) 0:00:44.033 ********* 2025-08-29 20:53:56.396157 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.396168 | orchestrator | 2025-08-29 20:53:56.396178 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:53:56.396189 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.396201 | orchestrator | 2025-08-29 20:53:56.396211 | orchestrator | 2025-08-29 20:53:56.396222 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:53:56.396233 | orchestrator | Friday 29 August 2025 20:53:33 +0000 (0:00:00.536) 0:00:44.569 ********* 2025-08-29 20:53:56.396244 | orchestrator | 
=============================================================================== 2025-08-29 20:53:56.396255 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.00s 2025-08-29 20:53:56.396265 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.77s 2025-08-29 20:53:56.396276 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.59s 2025-08-29 20:53:56.396287 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.89s 2025-08-29 20:53:56.396298 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.31s 2025-08-29 20:53:56.396309 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.10s 2025-08-29 20:53:56.396320 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s 2025-08-29 20:53:56.396331 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s 2025-08-29 20:53:56.396341 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.54s 2025-08-29 20:53:56.396352 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.34s 2025-08-29 20:53:56.396363 | orchestrator | 2025-08-29 20:53:56.396374 | orchestrator | 2025-08-29 20:53:56.396385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:53:56.396396 | orchestrator | 2025-08-29 20:53:56.396407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 20:53:56.396417 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:00.444) 0:00:00.444 ********* 2025-08-29 20:53:56.396428 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-08-29 20:53:56.396439 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-08-29 20:53:56.396450 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-08-29 20:53:56.396467 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-08-29 20:53:56.396478 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-08-29 20:53:56.396488 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-08-29 20:53:56.396499 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-08-29 20:53:56.396510 | orchestrator | 2025-08-29 20:53:56.396520 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-08-29 20:53:56.396531 | orchestrator | 2025-08-29 20:53:56.396542 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-08-29 20:53:56.396552 | orchestrator | Friday 29 August 2025 20:52:50 +0000 (0:00:01.286) 0:00:01.731 ********* 2025-08-29 20:53:56.396575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:53:56.396588 | orchestrator | 2025-08-29 20:53:56.396599 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-08-29 20:53:56.396610 | orchestrator | Friday 29 August 2025 20:52:51 +0000 (0:00:01.349) 0:00:03.080 ********* 2025-08-29 
20:53:56.396620 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:53:56.396632 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:53:56.396642 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:53:56.396653 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:53:56.396664 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:53:56.396878 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:53:56.396900 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.396911 | orchestrator | 2025-08-29 20:53:56.396922 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-08-29 20:53:56.396940 | orchestrator | Friday 29 August 2025 20:52:54 +0000 (0:00:03.136) 0:00:06.217 ********* 2025-08-29 20:53:56.396951 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.396962 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:53:56.396973 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:53:56.396983 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:53:56.396994 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:53:56.397005 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:53:56.397015 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:53:56.397026 | orchestrator | 2025-08-29 20:53:56.397037 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-08-29 20:53:56.397048 | orchestrator | Friday 29 August 2025 20:52:58 +0000 (0:00:03.447) 0:00:09.665 ********* 2025-08-29 20:53:56.397059 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:56.397070 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:56.397081 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:56.397091 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:56.397102 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.397113 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:53:56.397124 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:56.397134 | orchestrator | 2025-08-29 20:53:56.397145 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-08-29 20:53:56.397156 | orchestrator | Friday 29 August 2025 20:53:00 +0000 (0:00:01.743) 0:00:11.409 ********* 2025-08-29 20:53:56.397167 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:56.397177 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:56.397188 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:53:56.397199 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:56.397210 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:56.397220 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:56.397231 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.397242 | orchestrator | 2025-08-29 20:53:56.397252 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-08-29 20:53:56.397262 | orchestrator | Friday 29 August 2025 20:53:10 +0000 (0:00:10.031) 0:00:21.440 ********* 2025-08-29 20:53:56.397279 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:56.397289 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:53:56.397299 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:56.397308 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:56.397318 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:56.397327 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:56.397336 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.397346 | 
orchestrator | 2025-08-29 20:53:56.397355 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-08-29 20:53:56.397365 | orchestrator | Friday 29 August 2025 20:53:35 +0000 (0:00:25.617) 0:00:47.057 ********* 2025-08-29 20:53:56.397375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:53:56.397386 | orchestrator | 2025-08-29 20:53:56.397395 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-08-29 20:53:56.397405 | orchestrator | Friday 29 August 2025 20:53:36 +0000 (0:00:01.187) 0:00:48.245 ********* 2025-08-29 20:53:56.397415 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-08-29 20:53:56.397425 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-08-29 20:53:56.397434 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-08-29 20:53:56.397444 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-08-29 20:53:56.397453 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-08-29 20:53:56.397463 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-08-29 20:53:56.397473 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-08-29 20:53:56.397483 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-08-29 20:53:56.397492 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-08-29 20:53:56.397501 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-08-29 20:53:56.397511 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-08-29 20:53:56.397521 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-08-29 20:53:56.397530 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-08-29 20:53:56.397539 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-08-29 20:53:56.397549 | orchestrator | 2025-08-29 20:53:56.397558 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-08-29 20:53:56.397568 | orchestrator | Friday 29 August 2025 20:53:40 +0000 (0:00:03.982) 0:00:52.227 ********* 2025-08-29 20:53:56.397578 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.397588 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:53:56.397597 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:53:56.397607 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:53:56.397616 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:53:56.397626 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:53:56.397635 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:53:56.397644 | orchestrator | 2025-08-29 20:53:56.397654 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-08-29 20:53:56.397664 | orchestrator | Friday 29 August 2025 20:53:42 +0000 (0:00:01.275) 0:00:53.502 ********* 2025-08-29 20:53:56.397673 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:56.397683 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.397692 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:56.397702 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:56.397711 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:56.397721 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 20:53:56.397730 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:56.397740 | orchestrator | 2025-08-29 20:53:56.397749 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-08-29 20:53:56.397766 | orchestrator | Friday 29 August 2025 20:53:43 +0000 (0:00:01.662) 0:00:55.165 ********* 2025-08-29 20:53:56.397782 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:53:56.397791 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:53:56.397801 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.397811 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:53:56.397836 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:53:56.397854 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:53:56.397864 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:53:56.397874 | orchestrator | 2025-08-29 20:53:56.397884 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-08-29 20:53:56.397894 | orchestrator | Friday 29 August 2025 20:53:45 +0000 (0:00:01.599) 0:00:56.764 ********* 2025-08-29 20:53:56.397903 | orchestrator | ok: [testbed-manager] 2025-08-29 20:53:56.397913 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:53:56.397922 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:53:56.397932 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:53:56.397941 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:53:56.397951 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:53:56.397960 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:53:56.397970 | orchestrator | 2025-08-29 20:53:56.397979 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-08-29 20:53:56.397989 | orchestrator | Friday 29 August 2025 20:53:47 +0000 (0:00:02.505) 0:00:59.270 ********* 2025-08-29 20:53:56.397999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-08-29 20:53:56.398010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:53:56.398061 | orchestrator | 2025-08-29 20:53:56.398071 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-08-29 20:53:56.398081 | orchestrator | Friday 29 August 2025 20:53:49 +0000 (0:00:01.754) 0:01:01.024 ********* 2025-08-29 20:53:56.398091 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.398100 | orchestrator | 2025-08-29 20:53:56.398110 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-08-29 20:53:56.398120 | orchestrator | Friday 29 August 2025 20:53:51 +0000 (0:00:01.916) 0:01:02.941 ********* 2025-08-29 20:53:56.398129 | orchestrator | changed: [testbed-manager] 2025-08-29 20:53:56.398139 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:53:56.398149 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:53:56.398158 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:53:56.398168 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:53:56.398177 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:53:56.398187 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:53:56.398196 | orchestrator | 2025-08-29 20:53:56.398206 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 20:53:56.398216 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398226 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398235 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398245 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398255 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398265 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398284 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:53:56.398294 | orchestrator | 2025-08-29 20:53:56.398303 | orchestrator | 2025-08-29 20:53:56.398313 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:53:56.398322 | orchestrator | Friday 29 August 2025 20:53:54 +0000 (0:00:03.164) 0:01:06.105 ********* 2025-08-29 20:53:56.398332 | orchestrator | =============================================================================== 2025-08-29 20:53:56.398341 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.62s 2025-08-29 20:53:56.398351 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.03s 2025-08-29 20:53:56.398360 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.98s 2025-08-29 20:53:56.398370 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.45s 2025-08-29 20:53:56.398379 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.16s 2025-08-29 20:53:56.398389 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.14s 2025-08-29 20:53:56.398398 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.51s 2025-08-29 20:53:56.398408 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.92s 2025-08-29 20:53:56.398417 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.75s 2025-08-29 20:53:56.398427 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.74s 2025-08-29 20:53:56.398436 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.66s 2025-08-29 20:53:56.398452 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.60s 2025-08-29 20:53:56.398462 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.35s 2025-08-29 20:53:56.398472 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.29s 2025-08-29 20:53:56.398485 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s 2025-08-29 20:53:56.398496 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.19s 2025-08-29 20:53:56.398505 | orchestrator | 2025-08-29 20:53:56 | INFO  | Task 
e7523fd1-77de-4761-94da-632c432dbf94 is in state SUCCESS 2025-08-29 20:53:56.398515 | orchestrator | 2025-08-29 20:53:56 | INFO  | Task dbf1ec19-c700-435a-bbb1-876412222811 is in state SUCCESS 2025-08-29 20:53:56.398525 | orchestrator | 2025-08-29 20:53:56 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:53:56.398535 | orchestrator | 2025-08-29 20:53:56 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:53:56.399034 | orchestrator | 2025-08-29 20:53:56 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:53:56.399111 | orchestrator | 2025-08-29 20:53:56 | INFO  | Wait 1 second(s) until the next check
On every check from 20:53:59 through 20:54:54 the three remaining tasks (d35709f1-abd8-4ffc-8773-1a56b1013ec2, d17d85b9-add6-457b-978d-dd39222789b5, 4aa3a55a-efc9-4f18-97a5-8550d96907ff) were reported in state STARTED roughly every three seconds, each check ending with "Wait 1 second(s) until the next check". 2025-08-29 20:54:57.275722 | orchestrator
| 2025-08-29 20:54:57 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:54:57.276471 | orchestrator | 2025-08-29 20:54:57 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:54:57.278444 | orchestrator | 2025-08-29 20:54:57 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:54:57.278470 | orchestrator | 2025-08-29 20:54:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:00.312545 | orchestrator | 2025-08-29 20:55:00 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:00.316884 | orchestrator | 2025-08-29 20:55:00 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:00.317559 | orchestrator | 2025-08-29 20:55:00 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:00.317880 | orchestrator | 2025-08-29 20:55:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:03.355294 | orchestrator | 2025-08-29 20:55:03 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:03.356227 | orchestrator | 2025-08-29 20:55:03 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:03.358305 | orchestrator | 2025-08-29 20:55:03 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:03.358384 | orchestrator | 2025-08-29 20:55:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:06.388277 | orchestrator | 2025-08-29 20:55:06 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:06.389740 | orchestrator | 2025-08-29 20:55:06 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:06.391182 | orchestrator | 2025-08-29 20:55:06 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:06.391281 | orchestrator | 2025-08-29 20:55:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:09.428539 | orchestrator | 2025-08-29 20:55:09 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:09.429988 | orchestrator | 2025-08-29 20:55:09 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:09.431610 | orchestrator | 2025-08-29 20:55:09 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:09.431730 | orchestrator | 2025-08-29 20:55:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:12.494522 | orchestrator | 2025-08-29 20:55:12 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:12.496272 | orchestrator | 2025-08-29 20:55:12 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:12.497830 | orchestrator | 2025-08-29 20:55:12 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:12.497974 | orchestrator | 2025-08-29 20:55:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:15.549472 | orchestrator | 2025-08-29 20:55:15 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:15.553151 | orchestrator | 2025-08-29 20:55:15 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:15.554888 | orchestrator | 2025-08-29 20:55:15 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:15.554924 | orchestrator | 2025-08-29 20:55:15 | 
INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:18.604715 | orchestrator | 2025-08-29 20:55:18 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:18.605940 | orchestrator | 2025-08-29 20:55:18 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:18.608071 | orchestrator | 2025-08-29 20:55:18 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:18.608097 | orchestrator | 2025-08-29 20:55:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:21.644682 | orchestrator | 2025-08-29 20:55:21 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:21.646420 | orchestrator | 2025-08-29 20:55:21 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:21.648246 | orchestrator | 2025-08-29 20:55:21 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:21.648324 | orchestrator | 2025-08-29 20:55:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:24.699400 | orchestrator | 2025-08-29 20:55:24 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:24.700711 | orchestrator | 2025-08-29 20:55:24 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:24.705274 | orchestrator | 2025-08-29 20:55:24 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:24.706046 | orchestrator | 2025-08-29 20:55:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:27.751633 | orchestrator | 2025-08-29 20:55:27 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:27.752559 | orchestrator | 2025-08-29 20:55:27 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:27.755227 | orchestrator | 2025-08-29 20:55:27 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:27.755253 | orchestrator | 2025-08-29 20:55:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:30.804520 | orchestrator | 2025-08-29 20:55:30 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:30.806478 | orchestrator | 2025-08-29 20:55:30 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:30.808226 | orchestrator | 2025-08-29 20:55:30 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state STARTED 2025-08-29 20:55:30.808266 | orchestrator | 2025-08-29 20:55:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:33.851274 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:33.851838 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:33.852658 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:33.853665 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:33.860245 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task 4aa3a55a-efc9-4f18-97a5-8550d96907ff is in state SUCCESS 2025-08-29 20:55:33.860553 | orchestrator | 2025-08-29 20:55:33.860613 | orchestrator | 2025-08-29 20:55:33.860702 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-08-29 
20:55:33.860716 | orchestrator |
2025-08-29 20:55:33.860746 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 20:55:33.860759 | orchestrator | Friday 29 August 2025 20:53:03 +0000 (0:00:00.181) 0:00:00.181 *********
2025-08-29 20:55:33.860771 | orchestrator | ok: [testbed-manager]
2025-08-29 20:55:33.860823 | orchestrator |
2025-08-29 20:55:33.860845 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 20:55:33.860857 | orchestrator | Friday 29 August 2025 20:53:05 +0000 (0:00:01.693) 0:00:01.875 *********
2025-08-29 20:55:33.860869 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 20:55:33.860880 | orchestrator |
2025-08-29 20:55:33.860955 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 20:55:33.860967 | orchestrator | Friday 29 August 2025 20:53:06 +0000 (0:00:00.695) 0:00:02.571 *********
2025-08-29 20:55:33.860978 | orchestrator | changed: [testbed-manager]
2025-08-29 20:55:33.860989 | orchestrator |
2025-08-29 20:55:33.861001 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 20:55:33.861029 | orchestrator | Friday 29 August 2025 20:53:07 +0000 (0:00:01.286) 0:00:03.857 *********
2025-08-29 20:55:33.861041 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-08-29 20:55:33.861052 | orchestrator | ok: [testbed-manager]
2025-08-29 20:55:33.861062 | orchestrator |
2025-08-29 20:55:33.861073 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 20:55:33.861084 | orchestrator | Friday 29 August 2025 20:53:50 +0000 (0:00:43.131) 0:00:46.988 *********
2025-08-29 20:55:33.861095 | orchestrator | changed: [testbed-manager]
2025-08-29 20:55:33.861123 | orchestrator |
2025-08-29 20:55:33.861134 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 20:55:33.861146 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 20:55:33.861159 | orchestrator |
2025-08-29 20:55:33.861169 | orchestrator |
2025-08-29 20:55:33.861180 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 20:55:33.861191 | orchestrator | Friday 29 August 2025 20:53:54 +0000 (0:00:03.798) 0:00:50.787 *********
2025-08-29 20:55:33.861202 | orchestrator | ===============================================================================
2025-08-29 20:55:33.861213 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 43.13s
2025-08-29 20:55:33.861224 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.80s
2025-08-29 20:55:33.861235 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.69s
2025-08-29 20:55:33.861268 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.29s
2025-08-29 20:55:33.861279 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s
2025-08-29 20:55:33.861290 | orchestrator |
2025-08-29 20:55:33.863485 | orchestrator |
2025-08-29 20:55:33.863524 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 20:55:33.863536 | orchestrator |
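The "Wait 1 second(s) until the next check" output above is a poll-until-done loop: the queued task IDs are checked repeatedly until they leave the STARTED state. A minimal sketch of that pattern, with get_state as a hypothetical stand-in for the real status lookup (not the actual osism API):

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll task states until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":  # e.g. SUCCESS or FAILURE
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Dummy lookup so the sketch runs standalone: every task reports SUCCESS.
    wait_for_tasks(["d35709f1", "d17d85b9", "4aa3a55a"],
                   lambda task_id: "SUCCESS")
```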
2025-08-29 20:55:33.863547 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-08-29 20:55:33.863558 | orchestrator | Friday 29 August 2025 20:52:41 +0000 (0:00:00.212) 0:00:00.212 ********* 2025-08-29 20:55:33.863569 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:55:33.863581 | orchestrator | 2025-08-29 20:55:33.863592 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-08-29 20:55:33.863603 | orchestrator | Friday 29 August 2025 20:52:42 +0000 (0:00:01.094) 0:00:01.307 ********* 2025-08-29 20:55:33.863614 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863625 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863636 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863647 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863658 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863668 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863679 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863690 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863701 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863712 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863723 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-08-29 20:55:33.863734 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863744 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863755 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863810 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863822 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863833 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863844 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863854 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863865 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-08-29 20:55:33.863876 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-08-29 20:55:33.863910 | orchestrator | 2025-08-29 20:55:33.863922 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-08-29 20:55:33.863932 | orchestrator | Friday 29 August 
2025 20:52:47 +0000 (0:00:04.439) 0:00:05.746 ********* 2025-08-29 20:55:33.863954 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:55:33.863967 | orchestrator | 2025-08-29 20:55:33.863992 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-08-29 20:55:33.864003 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:01.148) 0:00:06.895 ********* 2025-08-29 20:55:33.864034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864077 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.864204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864217 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864379 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.864392 | orchestrator | 2025-08-29 20:55:33.864405 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 20:55:33.864418 | orchestrator | Friday 29 August 2025 20:52:54 +0000 (0:00:05.903) 0:00:12.798 ********* 2025-08-29 20:55:33.864438 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864477 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:55:33.864495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864529 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:55:33.864541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864670 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:55:33.864681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:55:33.864692 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:55:33.864703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:55:33.864846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864899 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:55:33.864919 | orchestrator | 2025-08-29 20:55:33.864930 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 20:55:33.864940 | orchestrator | Friday 29 August 2025 20:52:55 +0000 (0:00:01.006) 0:00:13.805 ********* 2025-08-29 20:55:33.864952 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.864963 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864980 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.864992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865024 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865035 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:55:33.865045 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:55:33.865056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865129 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:55:33.865141 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865200 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:55:33.865211 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:55:33.865222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865260 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:55:33.865271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 20:55:33.865288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.865321 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:55:33.865332 | orchestrator | 2025-08-29 20:55:33.865343 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-08-29 20:55:33.865354 | orchestrator | Friday 29 August 2025 20:52:57 +0000 (0:00:02.631) 0:00:16.436 ********* 2025-08-29 20:55:33.865363 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:55:33.865373 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:55:33.865383 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:55:33.865393 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:55:33.865402 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:55:33.865412 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:55:33.865421 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:55:33.865431 | orchestrator | 2025-08-29 20:55:33.865440 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-08-29 20:55:33.865450 | orchestrator | Friday 29 August 2025 20:52:58 +0000 (0:00:01.153) 0:00:17.590 ********* 2025-08-29 20:55:33.865460 | orchestrator | skipping: [testbed-manager] 2025-08-29 20:55:33.865469 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 20:55:33.865478 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:55:33.865488 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:55:33.865497 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:55:33.865507 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:55:33.865516 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:55:33.865526 | orchestrator | 2025-08-29 20:55:33.865535 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-08-29 20:55:33.865545 | orchestrator | Friday 29 August 2025 20:53:00 +0000 (0:00:01.805) 0:00:19.395 ********* 2025-08-29 20:55:33.865556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865566 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865623 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.865653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
20:55:33.865698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865719 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 20:55:33.865763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865831 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.865880 | orchestrator | 2025-08-29 20:55:33.865890 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-08-29 20:55:33.865899 | orchestrator | Friday 29 August 2025 20:53:06 +0000 (0:00:05.995) 0:00:25.391 ********* 2025-08-29 20:55:33.865909 | orchestrator | [WARNING]: Skipped 2025-08-29 20:55:33.865920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-08-29 20:55:33.865930 | orchestrator | to this access issue: 2025-08-29 20:55:33.865939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-08-29 20:55:33.865949 | orchestrator | directory 2025-08-29 20:55:33.865958 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:55:33.865968 | orchestrator | 2025-08-29 20:55:33.865977 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-08-29 20:55:33.865987 | orchestrator | Friday 29 August 2025 20:53:07 +0000 (0:00:01.178) 0:00:26.569 ********* 2025-08-29 20:55:33.865996 | orchestrator | [WARNING]: Skipped 2025-08-29 20:55:33.866006 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-08-29 20:55:33.866062 | orchestrator | to this access issue: 2025-08-29 20:55:33.866076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-08-29 20:55:33.866084 | orchestrator | directory 2025-08-29 20:55:33.866092 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:55:33.866100 | orchestrator | 2025-08-29 20:55:33.866107 | 
orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-08-29 20:55:33.866115 | orchestrator | Friday 29 August 2025 20:53:08 +0000 (0:00:00.869) 0:00:27.439 ********* 2025-08-29 20:55:33.866123 | orchestrator | [WARNING]: Skipped 2025-08-29 20:55:33.866131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-08-29 20:55:33.866139 | orchestrator | to this access issue: 2025-08-29 20:55:33.866146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-08-29 20:55:33.866154 | orchestrator | directory 2025-08-29 20:55:33.866162 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:55:33.866170 | orchestrator | 2025-08-29 20:55:33.866178 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-08-29 20:55:33.866186 | orchestrator | Friday 29 August 2025 20:53:09 +0000 (0:00:01.026) 0:00:28.468 ********* 2025-08-29 20:55:33.866194 | orchestrator | [WARNING]: Skipped 2025-08-29 20:55:33.866202 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-08-29 20:55:33.866216 | orchestrator | to this access issue: 2025-08-29 20:55:33.866224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-08-29 20:55:33.866232 | orchestrator | directory 2025-08-29 20:55:33.866240 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 20:55:33.866247 | orchestrator | 2025-08-29 20:55:33.866255 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-08-29 20:55:33.866263 | orchestrator | Friday 29 August 2025 20:53:10 +0000 (0:00:00.754) 0:00:29.222 ********* 2025-08-29 20:55:33.866271 | orchestrator | changed: [testbed-manager] 2025-08-29 20:55:33.866279 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:33.866287 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:33.866294 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:33.866302 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:55:33.866310 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:55:33.866322 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:55:33.866330 | orchestrator | 2025-08-29 20:55:33.866338 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-08-29 20:55:33.866345 | orchestrator | Friday 29 August 2025 20:53:13 +0000 (0:00:03.274) 0:00:32.496 ********* 2025-08-29 20:55:33.866353 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866369 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866377 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866385 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866393 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866400 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 20:55:33.866408 | orchestrator | 2025-08-29 20:55:33.866416 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-08-29 20:55:33.866424 | orchestrator | Friday 29 August 2025 20:53:16 +0000 (0:00:02.594) 0:00:35.091 ********* 2025-08-29 20:55:33.866432 | orchestrator | changed: [testbed-manager] 2025-08-29 20:55:33.866440 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:33.866447 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:33.866455 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:33.866467 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:55:33.866476 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:55:33.866483 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:55:33.866491 | orchestrator | 2025-08-29 20:55:33.866499 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 20:55:33.866507 | orchestrator | Friday 29 August 2025 20:53:19 +0000 (0:00:03.092) 0:00:38.184 ********* 2025-08-29 20:55:33.866515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866538 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866566 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866600 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866609 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866623 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866640 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866663 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866687 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.866700 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 20:55:33.866709 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866717 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866725 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866737 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.866745 | orchestrator | 2025-08-29 20:55:33.866754 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-08-29 20:55:33.866761 | orchestrator | Friday 29 August 2025 20:53:21 +0000 (0:00:02.506) 0:00:40.691 ********* 2025-08-29 20:55:33.866769 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866777 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866808 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866819 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866830 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866842 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 20:55:33.866854 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-08-29 
20:55:33.866868 | orchestrator | 2025-08-29 20:55:33.866886 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-08-29 20:55:33.866902 | orchestrator | Friday 29 August 2025 20:53:25 +0000 (0:00:03.477) 0:00:44.168 ********* 2025-08-29 20:55:33.866915 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866943 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866959 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866966 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866974 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-08-29 20:55:33.866982 | orchestrator | 2025-08-29 20:55:33.866990 | orchestrator | TASK [common : Check common containers] **************************************** 2025-08-29 20:55:33.866997 | orchestrator | Friday 29 August 2025 20:53:27 +0000 (0:00:02.360) 0:00:46.528 ********* 2025-08-29 20:55:33.867006 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867022 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867057 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 20:55:33.867088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867139 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:55:33.867223 | orchestrator | 2025-08-29 20:55:33.867231 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-08-29 20:55:33.867239 | orchestrator | Friday 29 August 2025 20:53:31 +0000 (0:00:03.600) 0:00:50.129 ********* 2025-08-29 20:55:33.867252 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:33.867260 | orchestrator | changed: [testbed-manager] 2025-08-29 20:55:33.867268 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:33.867276 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:33.867284 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:55:33.867291 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:55:33.867299 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:55:33.867307 | orchestrator | 2025-08-29 20:55:33.867315 | orchestrator | TASK 
[common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-08-29 20:55:33.867322 | orchestrator | Friday 29 August 2025 20:53:33 +0000 (0:00:01.634) 0:00:51.763 ********* 2025-08-29 20:55:33.867330 | orchestrator | changed: [testbed-manager] 2025-08-29 20:55:33.867338 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:33.867346 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:33.867353 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:33.867361 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:55:33.867369 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:55:33.867376 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:55:33.867384 | orchestrator | 2025-08-29 20:55:33.867392 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867400 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:01.324) 0:00:53.088 ********* 2025-08-29 20:55:33.867408 | orchestrator | 2025-08-29 20:55:33.867415 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867423 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.059) 0:00:53.148 ********* 2025-08-29 20:55:33.867431 | orchestrator | 2025-08-29 20:55:33.867439 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867446 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.060) 0:00:53.208 ********* 2025-08-29 20:55:33.867454 | orchestrator | 2025-08-29 20:55:33.867462 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867469 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.158) 0:00:53.367 ********* 2025-08-29 20:55:33.867478 | orchestrator | 2025-08-29 20:55:33.867485 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867493 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.059) 0:00:53.426 ********* 2025-08-29 20:55:33.867501 | orchestrator | 2025-08-29 20:55:33.867508 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867516 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.057) 0:00:53.484 ********* 2025-08-29 20:55:33.867524 | orchestrator | 2025-08-29 20:55:33.867532 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-08-29 20:55:33.867540 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.057) 0:00:53.542 ********* 2025-08-29 20:55:33.867548 | orchestrator | 2025-08-29 20:55:33.867555 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-08-29 20:55:33.867563 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.077) 0:00:53.619 ********* 2025-08-29 20:55:33.867571 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:33.867579 | orchestrator | changed: [testbed-manager] 2025-08-29 20:55:33.867591 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:55:33.867599 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:55:33.867607 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:55:33.867615 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:33.867622 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:33.867630 | orchestrator | 2025-08-29 20:55:33.867640 | orchestrator | 
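The "Restart fluentd container" handler above and the "Restart kolla-toolbox container" and "Restart cron container" handlers that follow were notified by the changed configuration tasks earlier in the play and run once per host after the flush. In effect each handler restarts (or recreates) the named container. The snippet below is only a rough stand-in using the Docker Python SDK: the container names are taken from the log, while the plain restart() call is an assumption, since kolla-ansible's own container module typically compares the running container with the desired definition and may recreate it instead.

```python
import docker
from docker.errors import NotFound

# Rough illustration of what the restart handlers amount to on one host.
# Container names come from the log above; a plain restart() is an
# assumption, not what kolla-ansible's module necessarily does.
client = docker.from_env()

for name in ("fluentd", "kolla_toolbox", "cron"):
    try:
        client.containers.get(name).restart(timeout=30)
        print(f"{name}: restarted")
    except NotFound:
        print(f"{name}: not present on this host")
```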
RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-08-29 20:55:33.867652 | orchestrator | Friday 29 August 2025 20:54:18 +0000 (0:00:43.592) 0:01:37.211 *********
2025-08-29 20:55:33.867666 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:55:33.867678 | orchestrator | changed: [testbed-node-4]
2025-08-29 20:55:33.867691 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:55:33.867704 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:55:33.867716 | orchestrator | changed: [testbed-node-3]
2025-08-29 20:55:33.867724 | orchestrator | changed: [testbed-node-5]
2025-08-29 20:55:33.867732 | orchestrator | changed: [testbed-manager]
2025-08-29 20:55:33.867739 | orchestrator |
2025-08-29 20:55:33.867747 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-08-29 20:55:33.867759 | orchestrator | Friday 29 August 2025 20:55:18 +0000 (0:01:00.450) 0:02:37.661 *********
2025-08-29 20:55:33.867767 | orchestrator | ok: [testbed-manager]
2025-08-29 20:55:33.867775 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:55:33.867799 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:55:33.867808 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:55:33.867815 | orchestrator | ok: [testbed-node-3]
2025-08-29 20:55:33.867823 | orchestrator | ok: [testbed-node-4]
2025-08-29 20:55:33.867831 | orchestrator | ok: [testbed-node-5]
2025-08-29 20:55:33.867838 | orchestrator |
2025-08-29 20:55:33.867846 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-08-29 20:55:33.867854 | orchestrator | Friday 29 August 2025 20:55:21 +0000 (0:00:02.140) 0:02:39.802 *********
2025-08-29 20:55:33.867862 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:55:33.867870 | orchestrator | changed: [testbed-node-5]
2025-08-29 20:55:33.867877 | orchestrator | changed: [testbed-node-3]
2025-08-29 20:55:33.867885 | orchestrator | changed: [testbed-node-4]
2025-08-29 20:55:33.867893 | orchestrator | changed: [testbed-manager]
2025-08-29 20:55:33.867901 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:55:33.867908 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:55:33.867916 | orchestrator |
2025-08-29 20:55:33.867924 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 20:55:33.867933 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867941 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867949 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867962 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867971 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867979 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867987 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 20:55:33.867994 | orchestrator |
2025-08-29 20:55:33.868002 | orchestrator |
2025-08-29 20:55:33.868010 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 20:55:33.868025 | orchestrator | Friday 29 August 2025 20:55:31 +0000 (0:00:10.439) 0:02:50.241 *********
2025-08-29 20:55:33.868032 | orchestrator | ===============================================================================
2025-08-29 20:55:33.868040 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 60.45s
2025-08-29 20:55:33.868048 | orchestrator | common : Restart fluentd container ------------------------------------- 43.59s
2025-08-29 20:55:33.868056 | orchestrator | common : Restart cron container ---------------------------------------- 10.44s
2025-08-29 20:55:33.868064 | orchestrator | common : Copying over config.json files for services -------------------- 6.00s
2025-08-29 20:55:33.868072 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.90s
2025-08-29 20:55:33.868079 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.44s
2025-08-29 20:55:33.868087 | orchestrator | common : Check common containers ---------------------------------------- 3.60s
2025-08-29 20:55:33.868095 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.48s
2025-08-29 20:55:33.868103 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.27s
2025-08-29 20:55:33.868110 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.09s
2025-08-29 20:55:33.868118 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.63s
2025-08-29 20:55:33.868126 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.60s
2025-08-29 20:55:33.868134 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.51s
2025-08-29 20:55:33.868142 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.36s
2025-08-29 20:55:33.868149 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.14s
2025-08-29 20:55:33.868157 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.81s
2025-08-29 20:55:33.868165 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s
2025-08-29 20:55:33.868173 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.32s
2025-08-29 20:55:33.868181 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.18s
2025-08-29 20:55:33.868188 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.15s
2025-08-29 20:55:33.868196 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED
2025-08-29 20:55:33.868204 | orchestrator | 2025-08-29 20:55:33 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state STARTED
2025-08-29 20:55:33.868212 | orchestrator | 2025-08-29 20:55:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 20:55:36.896916 | orchestrator | 2025-08-29 20:55:36 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED
2025-08-29 20:55:36.897014 | orchestrator | 2025-08-29 20:55:36 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED
2025-08-29 20:55:36.897030 | orchestrator | 2025-08-29 20:55:36 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED
2025-08-29 20:55:36.897042 | orchestrator |
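The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the manager-side process that queued these deployment tasks and now polls them until they report SUCCESS (as 13290af0-… and 62f350f9-… do further down). A minimal sketch of such a wait loop is shown here; get_task_state() is a hypothetical stand-in for the real state lookup in the osism client, and the one-second interval mirrors the log.

```python
import time

def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for the real task-state lookup on the manager."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1):
    # Poll every task until it leaves the STARTED state, mirroring the
    # "is in state ..." / "Wait 1 second(s) until the next check" output.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

In the excerpt the checks arrive roughly every three seconds despite the one-second wait, presumably because each round of state lookups itself takes a couple of seconds.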
2025-08-29 20:55:36 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:36.897053 | orchestrator | 2025-08-29 20:55:36 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:36.897064 | orchestrator | 2025-08-29 20:55:36 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state STARTED 2025-08-29 20:55:36.897076 | orchestrator | 2025-08-29 20:55:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:39.917699 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:39.918169 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:39.918925 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:39.919218 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:39.920288 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:39.920995 | orchestrator | 2025-08-29 20:55:39 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state STARTED 2025-08-29 20:55:39.921118 | orchestrator | 2025-08-29 20:55:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:42.941757 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:42.946749 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:42.947627 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:42.947665 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:42.947685 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:42.947704 | orchestrator | 2025-08-29 20:55:42 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state STARTED 2025-08-29 20:55:42.947724 | orchestrator | 2025-08-29 20:55:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:46.013233 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:46.013323 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:46.013338 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:46.013350 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:46.013362 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:46.013373 | orchestrator | 2025-08-29 20:55:45 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state STARTED 2025-08-29 20:55:46.013384 | orchestrator | 2025-08-29 20:55:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:49.024626 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:49.027635 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 
20:55:49.031987 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:49.032364 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:49.055607 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:49.055658 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task 13290af0-426a-4bb1-a664-ad2f2ff6e1b4 is in state SUCCESS 2025-08-29 20:55:49.055671 | orchestrator | 2025-08-29 20:55:49 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:55:49.055699 | orchestrator | 2025-08-29 20:55:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:52.076912 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:52.077020 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:52.077348 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:52.080914 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:52.084203 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:52.085057 | orchestrator | 2025-08-29 20:55:52 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:55:52.085084 | orchestrator | 2025-08-29 20:55:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:55.128953 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:55.129035 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:55.129586 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:55.130079 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state STARTED 2025-08-29 20:55:55.130531 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:55.131232 | orchestrator | 2025-08-29 20:55:55 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:55:55.131255 | orchestrator | 2025-08-29 20:55:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:55:58.152958 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:55:58.153044 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:55:58.153058 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:55:58.153457 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task 62f350f9-9d8a-4c97-a892-7a77ca98e758 is in state SUCCESS 2025-08-29 20:55:58.154522 | orchestrator | 2025-08-29 20:55:58.154553 | orchestrator | 2025-08-29 20:55:58.154592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:55:58.154605 | orchestrator | 2025-08-29 20:55:58.154616 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-08-29 20:55:58.154627 | orchestrator | Friday 29 August 2025 20:55:36 +0000 (0:00:00.363) 0:00:00.363 ********* 2025-08-29 20:55:58.154638 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:55:58.154650 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:55:58.154661 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:55:58.154672 | orchestrator | 2025-08-29 20:55:58.154682 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 20:55:58.154693 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.267) 0:00:00.630 ********* 2025-08-29 20:55:58.154705 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-08-29 20:55:58.154716 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-08-29 20:55:58.154726 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-08-29 20:55:58.154737 | orchestrator | 2025-08-29 20:55:58.154748 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-08-29 20:55:58.154758 | orchestrator | 2025-08-29 20:55:58.154790 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-08-29 20:55:58.154802 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.608) 0:00:01.239 ********* 2025-08-29 20:55:58.154813 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:55:58.154845 | orchestrator | 2025-08-29 20:55:58.154856 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-08-29 20:55:58.154867 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.581) 0:00:01.821 ********* 2025-08-29 20:55:58.154878 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 20:55:58.154889 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 20:55:58.154899 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 20:55:58.154910 | orchestrator | 2025-08-29 20:55:58.154921 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-08-29 20:55:58.154932 | orchestrator | Friday 29 August 2025 20:55:39 +0000 (0:00:00.817) 0:00:02.638 ********* 2025-08-29 20:55:58.154942 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 20:55:58.154953 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 20:55:58.154964 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 20:55:58.154975 | orchestrator | 2025-08-29 20:55:58.154986 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-08-29 20:55:58.154996 | orchestrator | Friday 29 August 2025 20:55:41 +0000 (0:00:02.192) 0:00:04.831 ********* 2025-08-29 20:55:58.155007 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:58.155018 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:58.155029 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:58.155039 | orchestrator | 2025-08-29 20:55:58.155050 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-08-29 20:55:58.155061 | orchestrator | Friday 29 August 2025 20:55:43 +0000 (0:00:02.105) 0:00:06.937 ********* 2025-08-29 20:55:58.155072 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:58.155083 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 20:55:58.155094 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:58.155106 | orchestrator | 2025-08-29 20:55:58.155118 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:55:58.155131 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.155144 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.155170 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.155183 | orchestrator | 2025-08-29 20:55:58.155194 | orchestrator | 2025-08-29 20:55:58.155206 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:55:58.155218 | orchestrator | Friday 29 August 2025 20:55:46 +0000 (0:00:02.766) 0:00:09.703 ********* 2025-08-29 20:55:58.155231 | orchestrator | =============================================================================== 2025-08-29 20:55:58.155242 | orchestrator | memcached : Restart memcached container --------------------------------- 2.77s 2025-08-29 20:55:58.155255 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.19s 2025-08-29 20:55:58.155267 | orchestrator | memcached : Check memcached container ----------------------------------- 2.11s 2025-08-29 20:55:58.155279 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2025-08-29 20:55:58.155291 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-08-29 20:55:58.155302 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.58s 2025-08-29 20:55:58.155314 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2025-08-29 20:55:58.155326 | orchestrator | 2025-08-29 20:55:58.155338 | orchestrator | 2025-08-29 20:55:58.155350 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:55:58.155362 | orchestrator | 2025-08-29 20:55:58.155374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 20:55:58.155392 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.303) 0:00:00.303 ********* 2025-08-29 20:55:58.155403 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:55:58.155416 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:55:58.155428 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:55:58.155440 | orchestrator | 2025-08-29 20:55:58.155452 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 20:55:58.155475 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.491) 0:00:00.795 ********* 2025-08-29 20:55:58.155487 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 20:55:58.155498 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 20:55:58.155509 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 20:55:58.155520 | orchestrator | 2025-08-29 20:55:58.155530 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 20:55:58.155541 | orchestrator | 2025-08-29 20:55:58.155552 | orchestrator | TASK [redis : include_tasks] 
*************************************************** 2025-08-29 20:55:58.155563 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.597) 0:00:01.393 ********* 2025-08-29 20:55:58.155573 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:55:58.155584 | orchestrator | 2025-08-29 20:55:58.155595 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-08-29 20:55:58.155606 | orchestrator | Friday 29 August 2025 20:55:39 +0000 (0:00:00.893) 0:00:02.286 ********* 2025-08-29 20:55:58.155619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155716 | orchestrator | 2025-08-29 20:55:58.155727 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-08-29 20:55:58.155738 | orchestrator | Friday 29 August 2025 20:55:40 +0000 (0:00:01.754) 0:00:04.041 ********* 2025-08-29 20:55:58.155750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155855 | orchestrator | 2025-08-29 20:55:58.155866 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-08-29 20:55:58.155876 | orchestrator | Friday 29 August 2025 20:55:43 +0000 (0:00:02.964) 0:00:07.005 ********* 2025-08-29 20:55:58.155888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.155975 | orchestrator | 2025-08-29 20:55:58.155986 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-08-29 20:55:58.155997 | orchestrator | Friday 29 August 2025 20:55:47 +0000 (0:00:03.214) 0:00:10.220 ********* 2025-08-29 20:55:58.156008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 
20:55:58.156020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.156036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.156048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.156065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.156081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 20:55:58.156093 | orchestrator | 2025-08-29 20:55:58.156104 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2025-08-29 20:55:58.156115 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:02.194) 0:00:12.414 ********* 2025-08-29 20:55:58.156126 | orchestrator | 2025-08-29 20:55:58.156137 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 20:55:58.156148 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.105) 0:00:12.520 ********* 2025-08-29 20:55:58.156159 | orchestrator | 2025-08-29 20:55:58.156170 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 20:55:58.156180 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.119) 0:00:12.640 ********* 2025-08-29 20:55:58.156191 | orchestrator | 2025-08-29 20:55:58.156202 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-08-29 20:55:58.156213 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.124) 0:00:12.764 ********* 2025-08-29 20:55:58.156224 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:58.156235 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:58.156246 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:58.156256 | orchestrator | 2025-08-29 20:55:58.156267 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-08-29 20:55:58.156278 | orchestrator | Friday 29 August 2025 20:55:54 +0000 (0:00:04.632) 0:00:17.396 ********* 2025-08-29 20:55:58.156288 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:55:58.156299 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:55:58.156310 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:55:58.156321 | orchestrator | 2025-08-29 20:55:58.156331 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:55:58.156342 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.156354 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.156370 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:55:58.156381 | orchestrator | 2025-08-29 20:55:58.156392 | orchestrator | 2025-08-29 20:55:58.156402 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:55:58.156413 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:03.233) 0:00:20.630 ********* 2025-08-29 20:55:58.156428 | orchestrator | =============================================================================== 2025-08-29 20:55:58.156439 | orchestrator | redis : Restart redis container ----------------------------------------- 4.63s 2025-08-29 20:55:58.156450 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.23s 2025-08-29 20:55:58.156461 | orchestrator | redis : Copying over redis config files --------------------------------- 3.21s 2025-08-29 20:55:58.156472 | orchestrator | redis : Copying over default config.json files -------------------------- 2.96s 2025-08-29 20:55:58.156482 | orchestrator | redis : Check redis containers ------------------------------------------ 2.19s 2025-08-29 20:55:58.156493 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.75s 2025-08-29 20:55:58.156504 | orchestrator | redis : 
include_tasks --------------------------------------------------- 0.89s 2025-08-29 20:55:58.156514 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-08-29 20:55:58.156525 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-08-29 20:55:58.156536 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.35s 2025-08-29 20:55:58.156547 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:55:58.156641 | orchestrator | 2025-08-29 20:55:58 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:55:58.156657 | orchestrator | 2025-08-29 20:55:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:01.180481 | orchestrator | 2025-08-29 20:56:01 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:01.181339 | orchestrator | 2025-08-29 20:56:01 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:01.183810 | orchestrator | 2025-08-29 20:56:01 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:01.185750 | orchestrator | 2025-08-29 20:56:01 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:01.186486 | orchestrator | 2025-08-29 20:56:01 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:01.186524 | orchestrator | 2025-08-29 20:56:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:04.223924 | orchestrator | 2025-08-29 20:56:04 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:04.224130 | orchestrator | 2025-08-29 20:56:04 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:04.225422 | orchestrator | 2025-08-29 20:56:04 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:04.225636 | orchestrator | 2025-08-29 20:56:04 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:04.226382 | orchestrator | 2025-08-29 20:56:04 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:04.226405 | orchestrator | 2025-08-29 20:56:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:07.312625 | orchestrator | 2025-08-29 20:56:07 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:07.312831 | orchestrator | 2025-08-29 20:56:07 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:07.313222 | orchestrator | 2025-08-29 20:56:07 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:07.314986 | orchestrator | 2025-08-29 20:56:07 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:07.315470 | orchestrator | 2025-08-29 20:56:07 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:07.315495 | orchestrator | 2025-08-29 20:56:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:10.350118 | orchestrator | 2025-08-29 20:56:10 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:10.350962 | orchestrator | 2025-08-29 20:56:10 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:10.351673 | orchestrator | 2025-08-29 20:56:10 | INFO  | Task 
925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:10.353250 | orchestrator | 2025-08-29 20:56:10 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:10.353273 | orchestrator | 2025-08-29 20:56:10 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:10.353284 | orchestrator | 2025-08-29 20:56:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:13.430468 | orchestrator | 2025-08-29 20:56:13 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:13.456178 | orchestrator | 2025-08-29 20:56:13 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:13.456585 | orchestrator | 2025-08-29 20:56:13 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:13.460023 | orchestrator | 2025-08-29 20:56:13 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:13.463852 | orchestrator | 2025-08-29 20:56:13 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:13.463878 | orchestrator | 2025-08-29 20:56:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:16.552088 | orchestrator | 2025-08-29 20:56:16 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state STARTED 2025-08-29 20:56:16.554190 | orchestrator | 2025-08-29 20:56:16 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:16.556208 | orchestrator | 2025-08-29 20:56:16 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:16.558146 | orchestrator | 2025-08-29 20:56:16 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:16.560014 | orchestrator | 2025-08-29 20:56:16 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:16.560880 | orchestrator | 2025-08-29 20:56:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:19.593424 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task d35709f1-abd8-4ffc-8773-1a56b1013ec2 is in state SUCCESS 2025-08-29 20:56:19.594641 | orchestrator | 2025-08-29 20:56:19.594697 | orchestrator | 2025-08-29 20:56:19.594718 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-08-29 20:56:19.594736 | orchestrator | 2025-08-29 20:56:19.594780 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-08-29 20:56:19.594800 | orchestrator | Friday 29 August 2025 20:52:42 +0000 (0:00:00.169) 0:00:00.169 ********* 2025-08-29 20:56:19.594818 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.594837 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.594865 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.594887 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.594905 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.594963 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.594993 | orchestrator | 2025-08-29 20:56:19.595017 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-08-29 20:56:19.595036 | orchestrator | Friday 29 August 2025 20:52:42 +0000 (0:00:00.660) 0:00:00.830 ********* 2025-08-29 20:56:19.595052 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.595071 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.595090 | orchestrator | skipping: [testbed-node-5] 
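The k3s_prereq tasks in this play prepare each node's kernel networking for Kubernetes, most visibly by enabling IPv4 and IPv6 forwarding through sysctl, as logged just below. A minimal sketch of that kind of task, assuming the ansible.posix collection is available; the role's actual task names and variables may differ.

- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true    # apply immediately and persist in sysctl.conf

- name: Enable IPv6 forwarding
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: true

Both forwarding tasks report changed on all six testbed nodes below because the settings were not yet applied persistently on the freshly provisioned hosts.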
2025-08-29 20:56:19.595107 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.595124 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.595142 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.595159 | orchestrator | 2025-08-29 20:56:19.595176 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 20:56:19.595193 | orchestrator | Friday 29 August 2025 20:52:43 +0000 (0:00:00.599) 0:00:01.429 ********* 2025-08-29 20:56:19.595211 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.595229 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.595248 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.595266 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.595285 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.595304 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.595324 | orchestrator | 2025-08-29 20:56:19.595344 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 20:56:19.595364 | orchestrator | Friday 29 August 2025 20:52:44 +0000 (0:00:00.667) 0:00:02.097 ********* 2025-08-29 20:56:19.595384 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.595404 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.595423 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.595443 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.595463 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.595482 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.595502 | orchestrator | 2025-08-29 20:56:19.595522 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 20:56:19.595541 | orchestrator | Friday 29 August 2025 20:52:46 +0000 (0:00:02.048) 0:00:04.145 ********* 2025-08-29 20:56:19.595562 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.595582 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.595601 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.595621 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.595640 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.595657 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.595675 | orchestrator | 2025-08-29 20:56:19.595693 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 20:56:19.595712 | orchestrator | Friday 29 August 2025 20:52:47 +0000 (0:00:01.254) 0:00:05.400 ********* 2025-08-29 20:56:19.595731 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.595773 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.595794 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.595814 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.595832 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.595850 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.595869 | orchestrator | 2025-08-29 20:56:19.595887 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 20:56:19.595905 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.968) 0:00:06.369 ********* 2025-08-29 20:56:19.595924 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.595942 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.595960 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
20:56:19.595978 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.596011 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.596029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.596046 | orchestrator | 2025-08-29 20:56:19.596075 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 20:56:19.596120 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:00.630) 0:00:06.999 ********* 2025-08-29 20:56:19.596139 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.596157 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.596175 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.596193 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.596212 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.596237 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.596261 | orchestrator | 2025-08-29 20:56:19.596278 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 20:56:19.596301 | orchestrator | Friday 29 August 2025 20:52:50 +0000 (0:00:01.006) 0:00:08.005 ********* 2025-08-29 20:56:19.596324 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596350 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596374 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596393 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.596411 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596429 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596448 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596464 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.596482 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596500 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596539 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.596560 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596579 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.596617 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.596635 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 20:56:19.596655 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 20:56:19.596674 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.596692 | orchestrator | 2025-08-29 20:56:19.596710 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 20:56:19.596728 | orchestrator | Friday 29 August 2025 20:52:50 +0000 (0:00:00.746) 0:00:08.751 ********* 2025-08-29 20:56:19.596745 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.596823 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.596843 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
20:56:19.596860 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.596888 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.596907 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.596932 | orchestrator | 2025-08-29 20:56:19.596958 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 20:56:19.596976 | orchestrator | Friday 29 August 2025 20:52:52 +0000 (0:00:01.160) 0:00:09.912 ********* 2025-08-29 20:56:19.596998 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.597023 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.597046 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.597062 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.597077 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.597092 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.597112 | orchestrator | 2025-08-29 20:56:19.597139 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 20:56:19.597161 | orchestrator | Friday 29 August 2025 20:52:53 +0000 (0:00:01.148) 0:00:11.060 ********* 2025-08-29 20:56:19.597190 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.597205 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.597221 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.597247 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.597272 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.597290 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.597306 | orchestrator | 2025-08-29 20:56:19.597321 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 20:56:19.597340 | orchestrator | Friday 29 August 2025 20:52:59 +0000 (0:00:05.885) 0:00:16.946 ********* 2025-08-29 20:56:19.597368 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.597388 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.597404 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.597423 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.597446 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.597461 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.597476 | orchestrator | 2025-08-29 20:56:19.597492 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 20:56:19.597509 | orchestrator | Friday 29 August 2025 20:53:00 +0000 (0:00:01.716) 0:00:18.663 ********* 2025-08-29 20:56:19.597596 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.597628 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.597652 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.597669 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.597685 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.597701 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.597719 | orchestrator | 2025-08-29 20:56:19.597736 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 20:56:19.597775 | orchestrator | Friday 29 August 2025 20:53:03 +0000 (0:00:02.602) 0:00:21.266 ********* 2025-08-29 20:56:19.597793 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.597810 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.597827 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 20:56:19.597853 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.597869 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.597886 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.597903 | orchestrator | 2025-08-29 20:56:19.597920 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 20:56:19.597937 | orchestrator | Friday 29 August 2025 20:53:04 +0000 (0:00:01.120) 0:00:22.386 ********* 2025-08-29 20:56:19.597954 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 20:56:19.597971 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 20:56:19.597987 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 20:56:19.598004 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 20:56:19.598068 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 20:56:19.598088 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 20:56:19.598105 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 20:56:19.598122 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 20:56:19.598138 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 20:56:19.598155 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 20:56:19.598172 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 20:56:19.598189 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 20:56:19.598205 | orchestrator | 2025-08-29 20:56:19.598237 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 20:56:19.598254 | orchestrator | Friday 29 August 2025 20:53:06 +0000 (0:00:02.024) 0:00:24.411 ********* 2025-08-29 20:56:19.598271 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.598287 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.598304 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.598332 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.598348 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.598365 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.598381 | orchestrator | 2025-08-29 20:56:19.598412 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 20:56:19.598429 | orchestrator | 2025-08-29 20:56:19.598446 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 20:56:19.598463 | orchestrator | Friday 29 August 2025 20:53:08 +0000 (0:00:01.865) 0:00:26.277 ********* 2025-08-29 20:56:19.598479 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.598496 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.598513 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.598529 | orchestrator | 2025-08-29 20:56:19.598546 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 20:56:19.598563 | orchestrator | Friday 29 August 2025 20:53:09 +0000 (0:00:00.907) 0:00:27.184 ********* 2025-08-29 20:56:19.598579 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.598596 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.598613 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.598629 | orchestrator | 2025-08-29 20:56:19.598645 | orchestrator | TASK [k3s_server : Stop k3s] 
*************************************************** 2025-08-29 20:56:19.598662 | orchestrator | Friday 29 August 2025 20:53:10 +0000 (0:00:01.148) 0:00:28.332 ********* 2025-08-29 20:56:19.598678 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.598694 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.598711 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.598727 | orchestrator | 2025-08-29 20:56:19.598744 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 20:56:19.598779 | orchestrator | Friday 29 August 2025 20:53:11 +0000 (0:00:01.239) 0:00:29.572 ********* 2025-08-29 20:56:19.598796 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.598812 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.598829 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.598845 | orchestrator | 2025-08-29 20:56:19.598862 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 20:56:19.598879 | orchestrator | Friday 29 August 2025 20:53:12 +0000 (0:00:00.818) 0:00:30.390 ********* 2025-08-29 20:56:19.598895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.598912 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.598928 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.598945 | orchestrator | 2025-08-29 20:56:19.598961 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 20:56:19.598978 | orchestrator | Friday 29 August 2025 20:53:12 +0000 (0:00:00.321) 0:00:30.712 ********* 2025-08-29 20:56:19.598994 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.599011 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.599027 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.599044 | orchestrator | 2025-08-29 20:56:19.599060 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 20:56:19.599077 | orchestrator | Friday 29 August 2025 20:53:13 +0000 (0:00:00.603) 0:00:31.315 ********* 2025-08-29 20:56:19.599093 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.599110 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.599126 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.599143 | orchestrator | 2025-08-29 20:56:19.599159 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 20:56:19.599176 | orchestrator | Friday 29 August 2025 20:53:14 +0000 (0:00:01.209) 0:00:32.525 ********* 2025-08-29 20:56:19.599193 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:56:19.599210 | orchestrator | 2025-08-29 20:56:19.599226 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 20:56:19.599242 | orchestrator | Friday 29 August 2025 20:53:15 +0000 (0:00:00.754) 0:00:33.279 ********* 2025-08-29 20:56:19.599258 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.599284 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.599300 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.599317 | orchestrator | 2025-08-29 20:56:19.599333 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 20:56:19.599349 | orchestrator | Friday 29 August 2025 20:53:18 +0000 (0:00:02.570) 0:00:35.849 ********* 2025-08-29 20:56:19.599366 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.599383 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.599399 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.599415 | orchestrator | 2025-08-29 20:56:19.599438 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 20:56:19.599454 | orchestrator | Friday 29 August 2025 20:53:19 +0000 (0:00:01.008) 0:00:36.858 ********* 2025-08-29 20:56:19.599471 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.599488 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.599504 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.599521 | orchestrator | 2025-08-29 20:56:19.599537 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 20:56:19.599554 | orchestrator | Friday 29 August 2025 20:53:19 +0000 (0:00:00.824) 0:00:37.683 ********* 2025-08-29 20:56:19.599571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.599587 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.599603 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.599619 | orchestrator | 2025-08-29 20:56:19.599636 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 20:56:19.599653 | orchestrator | Friday 29 August 2025 20:53:21 +0000 (0:00:01.449) 0:00:39.132 ********* 2025-08-29 20:56:19.599670 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.599686 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.599702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.599719 | orchestrator | 2025-08-29 20:56:19.599736 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-08-29 20:56:19.599809 | orchestrator | Friday 29 August 2025 20:53:21 +0000 (0:00:00.324) 0:00:39.456 ********* 2025-08-29 20:56:19.599829 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.599845 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.599862 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.599878 | orchestrator | 2025-08-29 20:56:19.599895 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 20:56:19.599912 | orchestrator | Friday 29 August 2025 20:53:22 +0000 (0:00:00.663) 0:00:40.119 ********* 2025-08-29 20:56:19.599928 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.599945 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.599962 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.600026 | orchestrator | 2025-08-29 20:56:19.600053 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 20:56:19.600070 | orchestrator | Friday 29 August 2025 20:53:24 +0000 (0:00:02.176) 0:00:42.296 ********* 2025-08-29 20:56:19.600088 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 20:56:19.600105 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 20:56:19.600122 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-08-29 20:56:19.600138 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 20:56:19.600155 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 20:56:19.600172 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 20:56:19.600195 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 20:56:19.600209 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 20:56:19.600223 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 20:56:19.600236 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 20:56:19.600250 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 20:56:19.600263 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 20:56:19.600277 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-08-29 20:56:19.600290 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.600304 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.600318 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.600332 | orchestrator | 2025-08-29 20:56:19.600345 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-08-29 20:56:19.600359 | orchestrator | Friday 29 August 2025 20:54:19 +0000 (0:00:54.690) 0:01:36.987 ********* 2025-08-29 20:56:19.600372 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.600386 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.600399 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.600413 | orchestrator | 2025-08-29 20:56:19.600427 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-08-29 20:56:19.600440 | orchestrator | Friday 29 August 2025 20:54:19 +0000 (0:00:00.345) 0:01:37.332 ********* 2025-08-29 20:56:19.600454 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.600467 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.600481 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.600495 | orchestrator | 2025-08-29 20:56:19.600509 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-08-29 20:56:19.600523 | orchestrator | Friday 29 August 2025 20:54:20 +0000 (0:00:01.266) 0:01:38.600 ********* 2025-08-29 20:56:19.600537 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.600551 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.600564 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.600578 | orchestrator | 2025-08-29 20:56:19.600591 | orchestrator | TASK 
[k3s_server : Enable and check K3s service] ******************************* 2025-08-29 20:56:19.600605 | orchestrator | Friday 29 August 2025 20:54:22 +0000 (0:00:01.291) 0:01:39.892 ********* 2025-08-29 20:56:19.600618 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.600632 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.600645 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.600658 | orchestrator | 2025-08-29 20:56:19.600698 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-08-29 20:56:19.600712 | orchestrator | Friday 29 August 2025 20:54:45 +0000 (0:00:23.779) 0:02:03.672 ********* 2025-08-29 20:56:19.600725 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.600739 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.600769 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.600783 | orchestrator | 2025-08-29 20:56:19.600803 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-08-29 20:56:19.600816 | orchestrator | Friday 29 August 2025 20:54:46 +0000 (0:00:00.720) 0:02:04.393 ********* 2025-08-29 20:56:19.600830 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.600844 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.600858 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.600880 | orchestrator | 2025-08-29 20:56:19.600893 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-08-29 20:56:19.600906 | orchestrator | Friday 29 August 2025 20:54:47 +0000 (0:00:00.934) 0:02:05.327 ********* 2025-08-29 20:56:19.600920 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.600934 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.600947 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.600961 | orchestrator | 2025-08-29 20:56:19.600981 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-08-29 20:56:19.600995 | orchestrator | Friday 29 August 2025 20:54:48 +0000 (0:00:00.719) 0:02:06.047 ********* 2025-08-29 20:56:19.601008 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.601022 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.601035 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.601049 | orchestrator | 2025-08-29 20:56:19.601062 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-08-29 20:56:19.601076 | orchestrator | Friday 29 August 2025 20:54:48 +0000 (0:00:00.676) 0:02:06.723 ********* 2025-08-29 20:56:19.601089 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.601102 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.601115 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.601129 | orchestrator | 2025-08-29 20:56:19.601142 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-08-29 20:56:19.601155 | orchestrator | Friday 29 August 2025 20:54:49 +0000 (0:00:00.340) 0:02:07.063 ********* 2025-08-29 20:56:19.601168 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.601181 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.601195 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.601208 | orchestrator | 2025-08-29 20:56:19.601221 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-08-29 20:56:19.601234 | orchestrator | Friday 29 August 2025 
20:54:50 +0000 (0:00:00.824) 0:02:07.887 ********* 2025-08-29 20:56:19.601247 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.601260 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.601274 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.601287 | orchestrator | 2025-08-29 20:56:19.601300 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-08-29 20:56:19.601314 | orchestrator | Friday 29 August 2025 20:54:50 +0000 (0:00:00.642) 0:02:08.530 ********* 2025-08-29 20:56:19.601328 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.601341 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.601354 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.601367 | orchestrator | 2025-08-29 20:56:19.601381 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-08-29 20:56:19.601394 | orchestrator | Friday 29 August 2025 20:54:51 +0000 (0:00:00.855) 0:02:09.385 ********* 2025-08-29 20:56:19.601408 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:19.601421 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:19.601435 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:19.601449 | orchestrator | 2025-08-29 20:56:19.601462 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-08-29 20:56:19.601475 | orchestrator | Friday 29 August 2025 20:54:52 +0000 (0:00:00.883) 0:02:10.268 ********* 2025-08-29 20:56:19.601489 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.601503 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.601516 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.601530 | orchestrator | 2025-08-29 20:56:19.601543 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-08-29 20:56:19.601557 | orchestrator | Friday 29 August 2025 20:54:52 +0000 (0:00:00.496) 0:02:10.765 ********* 2025-08-29 20:56:19.601570 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.601584 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.601598 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.601611 | orchestrator | 2025-08-29 20:56:19.601625 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-08-29 20:56:19.601646 | orchestrator | Friday 29 August 2025 20:54:53 +0000 (0:00:00.315) 0:02:11.080 ********* 2025-08-29 20:56:19.601659 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.601673 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.601687 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.601700 | orchestrator | 2025-08-29 20:56:19.601714 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-08-29 20:56:19.601727 | orchestrator | Friday 29 August 2025 20:54:53 +0000 (0:00:00.671) 0:02:11.751 ********* 2025-08-29 20:56:19.601741 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.601769 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.601784 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.601797 | orchestrator | 2025-08-29 20:56:19.601816 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-08-29 20:56:19.601830 | orchestrator | Friday 29 August 2025 20:54:54 +0000 (0:00:00.723) 0:02:12.475 ********* 
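The node-token steps above (wait for the file, relax its mode, read it, store it as a fact, restore the mode) exist so that the agent nodes can later join with the token generated on the first server. A minimal sketch of the read-and-store part, assuming the default k3s token path; the fact name is illustrative, not the role's actual variable:

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token   # default k3s server token location
  register: node_token_raw

- name: Store Master node-token
  ansible.builtin.set_fact:
    k3s_token: "{{ node_token_raw.content | b64decode | trim }}"   # slurp returns base64-encoded content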
2025-08-29 20:56:19.601892 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 20:56:19.601907 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 20:56:19.601921 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 20:56:19.601936 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 20:56:19.601950 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 20:56:19.601963 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 20:56:19.601977 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 20:56:19.601992 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 20:56:19.602006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 20:56:19.602044 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-08-29 20:56:19.602060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 20:56:19.602074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 20:56:19.602095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-08-29 20:56:19.602109 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 20:56:19.602122 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 20:56:19.602136 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 20:56:19.602149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 20:56:19.602162 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 20:56:19.602175 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 20:56:19.602196 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 20:56:19.602217 | orchestrator | 2025-08-29 20:56:19.602240 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-08-29 20:56:19.602256 | orchestrator | 2025-08-29 20:56:19.602269 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-08-29 20:56:19.602283 | orchestrator | Friday 29 August 2025 20:54:57 +0000 (0:00:03.315) 0:02:15.790 ********* 2025-08-29 20:56:19.602296 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.602319 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.602333 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.602346 | orchestrator | 2025-08-29 20:56:19.602359 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-08-29 20:56:19.602373 | orchestrator | Friday 29 August 2025 20:54:58 +0000 
(0:00:00.334) 0:02:16.125 ********* 2025-08-29 20:56:19.602387 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.602400 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.602414 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.602428 | orchestrator | 2025-08-29 20:56:19.602441 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-08-29 20:56:19.602455 | orchestrator | Friday 29 August 2025 20:54:59 +0000 (0:00:01.491) 0:02:17.616 ********* 2025-08-29 20:56:19.602469 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.602482 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.602496 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.602509 | orchestrator | 2025-08-29 20:56:19.602523 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-08-29 20:56:19.602536 | orchestrator | Friday 29 August 2025 20:55:00 +0000 (0:00:00.302) 0:02:17.919 ********* 2025-08-29 20:56:19.602550 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 20:56:19.602564 | orchestrator | 2025-08-29 20:56:19.602578 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-08-29 20:56:19.602592 | orchestrator | Friday 29 August 2025 20:55:00 +0000 (0:00:00.666) 0:02:18.586 ********* 2025-08-29 20:56:19.602605 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.602618 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.602632 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.602646 | orchestrator | 2025-08-29 20:56:19.602659 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-08-29 20:56:19.602672 | orchestrator | Friday 29 August 2025 20:55:01 +0000 (0:00:00.329) 0:02:18.915 ********* 2025-08-29 20:56:19.602686 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.602699 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.602713 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.602726 | orchestrator | 2025-08-29 20:56:19.602740 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-08-29 20:56:19.602800 | orchestrator | Friday 29 August 2025 20:55:01 +0000 (0:00:00.297) 0:02:19.212 ********* 2025-08-29 20:56:19.602815 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.602829 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.602842 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.602856 | orchestrator | 2025-08-29 20:56:19.602875 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-08-29 20:56:19.602888 | orchestrator | Friday 29 August 2025 20:55:01 +0000 (0:00:00.501) 0:02:19.714 ********* 2025-08-29 20:56:19.602900 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.602913 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.602934 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.602956 | orchestrator | 2025-08-29 20:56:19.602970 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-08-29 20:56:19.602984 | orchestrator | Friday 29 August 2025 20:55:02 +0000 (0:00:00.701) 0:02:20.416 ********* 2025-08-29 20:56:19.602997 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.603011 | orchestrator | changed: [testbed-node-4] 
2025-08-29 20:56:19.603024 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.603038 | orchestrator | 2025-08-29 20:56:19.603051 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-08-29 20:56:19.603065 | orchestrator | Friday 29 August 2025 20:55:03 +0000 (0:00:01.041) 0:02:21.457 ********* 2025-08-29 20:56:19.603078 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.603090 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.603101 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.603121 | orchestrator | 2025-08-29 20:56:19.603132 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-08-29 20:56:19.603143 | orchestrator | Friday 29 August 2025 20:55:04 +0000 (0:00:01.131) 0:02:22.588 ********* 2025-08-29 20:56:19.603155 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:19.603166 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:19.603178 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:19.603189 | orchestrator | 2025-08-29 20:56:19.603201 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 20:56:19.603212 | orchestrator | 2025-08-29 20:56:19.603224 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 20:56:19.603235 | orchestrator | Friday 29 August 2025 20:55:17 +0000 (0:00:13.229) 0:02:35.817 ********* 2025-08-29 20:56:19.603246 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.603257 | orchestrator | 2025-08-29 20:56:19.603278 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 20:56:19.603293 | orchestrator | Friday 29 August 2025 20:55:18 +0000 (0:00:00.799) 0:02:36.616 ********* 2025-08-29 20:56:19.603303 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603315 | orchestrator | 2025-08-29 20:56:19.603325 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 20:56:19.603336 | orchestrator | Friday 29 August 2025 20:55:19 +0000 (0:00:00.412) 0:02:37.029 ********* 2025-08-29 20:56:19.603346 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 20:56:19.603356 | orchestrator | 2025-08-29 20:56:19.603367 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 20:56:19.603378 | orchestrator | Friday 29 August 2025 20:55:19 +0000 (0:00:00.535) 0:02:37.565 ********* 2025-08-29 20:56:19.603388 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603399 | orchestrator | 2025-08-29 20:56:19.603410 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 20:56:19.603421 | orchestrator | Friday 29 August 2025 20:55:20 +0000 (0:00:00.780) 0:02:38.345 ********* 2025-08-29 20:56:19.603433 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603445 | orchestrator | 2025-08-29 20:56:19.603457 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 20:56:19.603468 | orchestrator | Friday 29 August 2025 20:55:21 +0000 (0:00:01.024) 0:02:39.370 ********* 2025-08-29 20:56:19.603479 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 20:56:19.603490 | orchestrator | 2025-08-29 20:56:19.603501 | orchestrator | TASK [Change server address in the kubeconfig inside the 
manager service] ****** 2025-08-29 20:56:19.603512 | orchestrator | Friday 29 August 2025 20:55:23 +0000 (0:00:01.515) 0:02:40.885 ********* 2025-08-29 20:56:19.603523 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 20:56:19.603534 | orchestrator | 2025-08-29 20:56:19.603544 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 20:56:19.603556 | orchestrator | Friday 29 August 2025 20:55:23 +0000 (0:00:00.836) 0:02:41.721 ********* 2025-08-29 20:56:19.603567 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603578 | orchestrator | 2025-08-29 20:56:19.603590 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 20:56:19.603601 | orchestrator | Friday 29 August 2025 20:55:24 +0000 (0:00:00.442) 0:02:42.164 ********* 2025-08-29 20:56:19.603613 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603625 | orchestrator | 2025-08-29 20:56:19.603636 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-08-29 20:56:19.603648 | orchestrator | 2025-08-29 20:56:19.603659 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-08-29 20:56:19.603672 | orchestrator | Friday 29 August 2025 20:55:24 +0000 (0:00:00.433) 0:02:42.597 ********* 2025-08-29 20:56:19.603684 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.603696 | orchestrator | 2025-08-29 20:56:19.603708 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-08-29 20:56:19.603730 | orchestrator | Friday 29 August 2025 20:55:24 +0000 (0:00:00.139) 0:02:42.736 ********* 2025-08-29 20:56:19.603742 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:56:19.603770 | orchestrator | 2025-08-29 20:56:19.603782 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-08-29 20:56:19.603794 | orchestrator | Friday 29 August 2025 20:55:25 +0000 (0:00:00.229) 0:02:42.965 ********* 2025-08-29 20:56:19.603806 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.603818 | orchestrator | 2025-08-29 20:56:19.603830 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-08-29 20:56:19.603841 | orchestrator | Friday 29 August 2025 20:55:25 +0000 (0:00:00.755) 0:02:43.720 ********* 2025-08-29 20:56:19.603854 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.603865 | orchestrator | 2025-08-29 20:56:19.603877 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-08-29 20:56:19.603899 | orchestrator | Friday 29 August 2025 20:55:27 +0000 (0:00:01.890) 0:02:45.611 ********* 2025-08-29 20:56:19.603910 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.603923 | orchestrator | 2025-08-29 20:56:19.603934 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-08-29 20:56:19.603946 | orchestrator | Friday 29 August 2025 20:55:28 +0000 (0:00:00.804) 0:02:46.416 ********* 2025-08-29 20:56:19.603958 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.603969 | orchestrator | 2025-08-29 20:56:19.603980 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-08-29 20:56:19.603991 | orchestrator | Friday 29 August 2025 20:55:29 +0000 (0:00:00.431) 
0:02:46.847 ********* 2025-08-29 20:56:19.604002 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.604013 | orchestrator | 2025-08-29 20:56:19.604024 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-08-29 20:56:19.604035 | orchestrator | Friday 29 August 2025 20:55:36 +0000 (0:00:07.396) 0:02:54.244 ********* 2025-08-29 20:56:19.604045 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.604055 | orchestrator | 2025-08-29 20:56:19.604066 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-08-29 20:56:19.604077 | orchestrator | Friday 29 August 2025 20:55:48 +0000 (0:00:11.908) 0:03:06.153 ********* 2025-08-29 20:56:19.604088 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.604099 | orchestrator | 2025-08-29 20:56:19.604111 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-08-29 20:56:19.604129 | orchestrator | 2025-08-29 20:56:19.604141 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-08-29 20:56:19.604153 | orchestrator | Friday 29 August 2025 20:55:48 +0000 (0:00:00.479) 0:03:06.633 ********* 2025-08-29 20:56:19.604164 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.604176 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.604187 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.604199 | orchestrator | 2025-08-29 20:56:19.604210 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-08-29 20:56:19.604222 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.276) 0:03:06.909 ********* 2025-08-29 20:56:19.604243 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604255 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.604267 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.604277 | orchestrator | 2025-08-29 20:56:19.604288 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-08-29 20:56:19.604299 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.370) 0:03:07.280 ********* 2025-08-29 20:56:19.604310 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:56:19.604322 | orchestrator | 2025-08-29 20:56:19.604334 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-08-29 20:56:19.604345 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.474) 0:03:07.754 ********* 2025-08-29 20:56:19.604366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604377 | orchestrator | 2025-08-29 20:56:19.604389 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-08-29 20:56:19.604400 | orchestrator | Friday 29 August 2025 20:55:50 +0000 (0:00:00.185) 0:03:07.940 ********* 2025-08-29 20:56:19.604412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604424 | orchestrator | 2025-08-29 20:56:19.604435 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-08-29 20:56:19.604447 | orchestrator | Friday 29 August 2025 20:55:50 +0000 (0:00:00.212) 0:03:08.153 ********* 2025-08-29 20:56:19.604458 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604470 | orchestrator | 2025-08-29 20:56:19.604481 | 
orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-08-29 20:56:19.604493 | orchestrator | Friday 29 August 2025 20:55:50 +0000 (0:00:00.241) 0:03:08.394 ********* 2025-08-29 20:56:19.604504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604516 | orchestrator | 2025-08-29 20:56:19.604527 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-08-29 20:56:19.604539 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.454) 0:03:08.849 ********* 2025-08-29 20:56:19.604550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604562 | orchestrator | 2025-08-29 20:56:19.604573 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-08-29 20:56:19.604584 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.163) 0:03:09.013 ********* 2025-08-29 20:56:19.604596 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604607 | orchestrator | 2025-08-29 20:56:19.604619 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-08-29 20:56:19.604630 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.157) 0:03:09.170 ********* 2025-08-29 20:56:19.604642 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604653 | orchestrator | 2025-08-29 20:56:19.604664 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-08-29 20:56:19.604676 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.155) 0:03:09.325 ********* 2025-08-29 20:56:19.604687 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604699 | orchestrator | 2025-08-29 20:56:19.604710 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-08-29 20:56:19.604722 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.145) 0:03:09.470 ********* 2025-08-29 20:56:19.604733 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604744 | orchestrator | 2025-08-29 20:56:19.604774 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-08-29 20:56:19.604785 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.197) 0:03:09.667 ********* 2025-08-29 20:56:19.604796 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-08-29 20:56:19.604807 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-08-29 20:56:19.604818 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604834 | orchestrator | 2025-08-29 20:56:19.604847 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-08-29 20:56:19.604859 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:00.230) 0:03:09.898 ********* 2025-08-29 20:56:19.604872 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604888 | orchestrator | 2025-08-29 20:56:19.604905 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-08-29 20:56:19.604917 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:00.182) 0:03:10.080 ********* 2025-08-29 20:56:19.604984 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.604997 | orchestrator | 2025-08-29 20:56:19.605009 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-08-29 20:56:19.605021 | orchestrator | Friday 29 
August 2025 20:55:52 +0000 (0:00:00.153) 0:03:10.233 ********* 2025-08-29 20:56:19.605033 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605045 | orchestrator | 2025-08-29 20:56:19.605064 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-08-29 20:56:19.605076 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:00.186) 0:03:10.419 ********* 2025-08-29 20:56:19.605088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605100 | orchestrator | 2025-08-29 20:56:19.605112 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-08-29 20:56:19.605124 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:00.205) 0:03:10.625 ********* 2025-08-29 20:56:19.605136 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605148 | orchestrator | 2025-08-29 20:56:19.605159 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-08-29 20:56:19.605171 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:00.194) 0:03:10.820 ********* 2025-08-29 20:56:19.605186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605201 | orchestrator | 2025-08-29 20:56:19.605212 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-08-29 20:56:19.605223 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:00.526) 0:03:11.347 ********* 2025-08-29 20:56:19.605234 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605245 | orchestrator | 2025-08-29 20:56:19.605257 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-08-29 20:56:19.605268 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:00.185) 0:03:11.532 ********* 2025-08-29 20:56:19.605292 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605308 | orchestrator | 2025-08-29 20:56:19.605320 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-08-29 20:56:19.605332 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:00.191) 0:03:11.723 ********* 2025-08-29 20:56:19.605345 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605357 | orchestrator | 2025-08-29 20:56:19.605369 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-08-29 20:56:19.605381 | orchestrator | Friday 29 August 2025 20:55:54 +0000 (0:00:00.317) 0:03:12.041 ********* 2025-08-29 20:56:19.605392 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605405 | orchestrator | 2025-08-29 20:56:19.605415 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-08-29 20:56:19.605426 | orchestrator | Friday 29 August 2025 20:55:54 +0000 (0:00:00.207) 0:03:12.249 ********* 2025-08-29 20:56:19.605438 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605449 | orchestrator | 2025-08-29 20:56:19.605460 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-08-29 20:56:19.605471 | orchestrator | Friday 29 August 2025 20:55:54 +0000 (0:00:00.193) 0:03:12.442 ********* 2025-08-29 20:56:19.605482 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-08-29 20:56:19.605494 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-08-29 20:56:19.605507 | orchestrator | skipping: [testbed-node-0] 
=> (item=deployment/hubble-relay)  2025-08-29 20:56:19.605519 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-08-29 20:56:19.605531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605543 | orchestrator | 2025-08-29 20:56:19.605553 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-08-29 20:56:19.605565 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:00.526) 0:03:12.969 ********* 2025-08-29 20:56:19.605576 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605586 | orchestrator | 2025-08-29 20:56:19.605597 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-08-29 20:56:19.605608 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:00.191) 0:03:13.160 ********* 2025-08-29 20:56:19.605621 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605633 | orchestrator | 2025-08-29 20:56:19.605650 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-08-29 20:56:19.605662 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:00.187) 0:03:13.347 ********* 2025-08-29 20:56:19.605683 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605694 | orchestrator | 2025-08-29 20:56:19.605706 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-08-29 20:56:19.605718 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:00.155) 0:03:13.503 ********* 2025-08-29 20:56:19.605730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605742 | orchestrator | 2025-08-29 20:56:19.605803 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-08-29 20:56:19.605816 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:00.186) 0:03:13.689 ********* 2025-08-29 20:56:19.605827 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-08-29 20:56:19.605839 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-08-29 20:56:19.605850 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605861 | orchestrator | 2025-08-29 20:56:19.605872 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-08-29 20:56:19.605884 | orchestrator | Friday 29 August 2025 20:55:56 +0000 (0:00:00.389) 0:03:14.079 ********* 2025-08-29 20:56:19.605895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.605906 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.605917 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.605929 | orchestrator | 2025-08-29 20:56:19.605939 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-08-29 20:56:19.605954 | orchestrator | Friday 29 August 2025 20:55:56 +0000 (0:00:00.389) 0:03:14.469 ********* 2025-08-29 20:56:19.605965 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.605975 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.605985 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.605996 | orchestrator | 2025-08-29 20:56:19.606007 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-08-29 20:56:19.606167 | orchestrator | 2025-08-29 20:56:19.606184 | orchestrator | TASK [k9s : Gather variables for each operating system] 
************************ 2025-08-29 20:56:19.606195 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:00.915) 0:03:15.384 ********* 2025-08-29 20:56:19.606205 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:19.606216 | orchestrator | 2025-08-29 20:56:19.606226 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-08-29 20:56:19.606237 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:00.117) 0:03:15.502 ********* 2025-08-29 20:56:19.606248 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 20:56:19.606258 | orchestrator | 2025-08-29 20:56:19.606269 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 20:56:19.606279 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:00.294) 0:03:15.797 ********* 2025-08-29 20:56:19.606290 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:19.606301 | orchestrator | 2025-08-29 20:56:19.606311 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 20:56:19.606321 | orchestrator | 2025-08-29 20:56:19.606332 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 20:56:19.606342 | orchestrator | Friday 29 August 2025 20:56:02 +0000 (0:00:05.004) 0:03:20.802 ********* 2025-08-29 20:56:19.606353 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:19.606364 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:19.606374 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:19.606384 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:19.606395 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:19.606405 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:19.606416 | orchestrator | 2025-08-29 20:56:19.606435 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 20:56:19.606446 | orchestrator | Friday 29 August 2025 20:56:03 +0000 (0:00:00.576) 0:03:21.378 ********* 2025-08-29 20:56:19.606457 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 20:56:19.606475 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 20:56:19.606486 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 20:56:19.606496 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 20:56:19.606506 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 20:56:19.606517 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 20:56:19.606527 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 20:56:19.606538 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 20:56:19.606548 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 20:56:19.606558 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 20:56:19.606569 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 20:56:19.606579 | orchestrator | ok: 
[testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 20:56:19.606589 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 20:56:19.606600 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 20:56:19.606611 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 20:56:19.606621 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 20:56:19.606632 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 20:56:19.606642 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 20:56:19.606652 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 20:56:19.606663 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 20:56:19.606674 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 20:56:19.606685 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 20:56:19.606696 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 20:56:19.606706 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 20:56:19.606718 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 20:56:19.606728 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 20:56:19.606740 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 20:56:19.606764 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 20:56:19.606780 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 20:56:19.606791 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 20:56:19.606804 | orchestrator | 2025-08-29 20:56:19.606816 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 20:56:19.606828 | orchestrator | Friday 29 August 2025 20:56:15 +0000 (0:00:11.683) 0:03:33.062 ********* 2025-08-29 20:56:19.606841 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.606853 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.606866 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:19.606879 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.606893 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.606913 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.606927 | orchestrator | 2025-08-29 20:56:19.606940 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 20:56:19.606954 | orchestrator | Friday 29 August 2025 20:56:15 +0000 (0:00:00.548) 0:03:33.610 ********* 2025-08-29 20:56:19.606966 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:19.606979 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:19.606990 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:19.607001 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 20:56:19.607011 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:19.607022 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:19.607032 | orchestrator | 2025-08-29 20:56:19.607043 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:56:19.607053 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:56:19.607064 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 20:56:19.607082 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 20:56:19.607094 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 20:56:19.607106 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 20:56:19.607116 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 20:56:19.607126 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 20:56:19.607136 | orchestrator | 2025-08-29 20:56:19.607146 | orchestrator | 2025-08-29 20:56:19.607156 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:56:19.607166 | orchestrator | Friday 29 August 2025 20:56:16 +0000 (0:00:00.655) 0:03:34.266 ********* 2025-08-29 20:56:19.607177 | orchestrator | =============================================================================== 2025-08-29 20:56:19.607188 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.69s 2025-08-29 20:56:19.607200 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 23.78s 2025-08-29 20:56:19.607210 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.23s 2025-08-29 20:56:19.607220 | orchestrator | kubectl : Install required packages ------------------------------------ 11.91s 2025-08-29 20:56:19.607230 | orchestrator | Manage labels ---------------------------------------------------------- 11.68s 2025-08-29 20:56:19.607244 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.40s 2025-08-29 20:56:19.607259 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.89s 2025-08-29 20:56:19.607268 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.00s 2025-08-29 20:56:19.607279 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.32s 2025-08-29 20:56:19.607291 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.60s 2025-08-29 20:56:19.607302 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.57s 2025-08-29 20:56:19.607312 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.18s 2025-08-29 20:56:19.607322 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.05s 2025-08-29 20:56:19.607333 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.02s 2025-08-29 
20:56:19.607352 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.89s 2025-08-29 20:56:19.607362 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.87s 2025-08-29 20:56:19.607372 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.72s 2025-08-29 20:56:19.607383 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-08-29 20:56:19.607394 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.49s 2025-08-29 20:56:19.607404 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.45s 2025-08-29 20:56:19.607419 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:19.607429 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task a02e7a9d-5e9d-411b-bddd-9e24de2248f1 is in state STARTED 2025-08-29 20:56:19.607439 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task 92fbfb50-4150-45b6-81e0-c66255d52817 is in state STARTED 2025-08-29 20:56:19.607448 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:19.607458 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:19.607468 | orchestrator | 2025-08-29 20:56:19 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:19.607478 | orchestrator | 2025-08-29 20:56:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:22.662481 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:22.662806 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task a02e7a9d-5e9d-411b-bddd-9e24de2248f1 is in state STARTED 2025-08-29 20:56:22.664024 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task 92fbfb50-4150-45b6-81e0-c66255d52817 is in state STARTED 2025-08-29 20:56:22.665393 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:22.667315 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:22.668597 | orchestrator | 2025-08-29 20:56:22 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:22.668620 | orchestrator | 2025-08-29 20:56:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:25.747299 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:25.747396 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task a02e7a9d-5e9d-411b-bddd-9e24de2248f1 is in state SUCCESS 2025-08-29 20:56:25.748674 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task 92fbfb50-4150-45b6-81e0-c66255d52817 is in state STARTED 2025-08-29 20:56:25.749241 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:25.749842 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:25.750426 | orchestrator | 2025-08-29 20:56:25 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:25.750451 | orchestrator | 2025-08-29 20:56:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:28.780002 
| orchestrator | 2025-08-29 20:56:28 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:28.780088 | orchestrator | 2025-08-29 20:56:28 | INFO  | Task 92fbfb50-4150-45b6-81e0-c66255d52817 is in state SUCCESS 2025-08-29 20:56:28.780122 | orchestrator | 2025-08-29 20:56:28 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:28.780134 | orchestrator | 2025-08-29 20:56:28 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:28.780420 | orchestrator | 2025-08-29 20:56:28 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:28.780443 | orchestrator | 2025-08-29 20:56:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:31.807131 | orchestrator | 2025-08-29 20:56:31 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:31.807224 | orchestrator | 2025-08-29 20:56:31 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:31.807901 | orchestrator | 2025-08-29 20:56:31 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:31.808406 | orchestrator | 2025-08-29 20:56:31 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:31.808429 | orchestrator | 2025-08-29 20:56:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:34.842325 | orchestrator | 2025-08-29 20:56:34 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:34.842573 | orchestrator | 2025-08-29 20:56:34 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:34.843345 | orchestrator | 2025-08-29 20:56:34 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:34.843920 | orchestrator | 2025-08-29 20:56:34 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:34.844107 | orchestrator | 2025-08-29 20:56:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:37.872889 | orchestrator | 2025-08-29 20:56:37 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:37.874887 | orchestrator | 2025-08-29 20:56:37 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:37.878346 | orchestrator | 2025-08-29 20:56:37 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:37.880183 | orchestrator | 2025-08-29 20:56:37 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:37.880480 | orchestrator | 2025-08-29 20:56:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:40.925694 | orchestrator | 2025-08-29 20:56:40 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:40.931240 | orchestrator | 2025-08-29 20:56:40 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:40.935625 | orchestrator | 2025-08-29 20:56:40 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state STARTED 2025-08-29 20:56:40.936948 | orchestrator | 2025-08-29 20:56:40 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:56:40.936979 | orchestrator | 2025-08-29 20:56:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:56:43.966274 | orchestrator | 2025-08-29 20:56:43 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:56:43.966364 | 
orchestrator | 2025-08-29 20:56:43 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:56:43.969285 | orchestrator | 2025-08-29 20:56:43 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:56:43.970126 | orchestrator | 2025-08-29 20:56:43 | INFO  | Task 1681bf03-8374-4763-895c-6c1d82da6e6a is in state SUCCESS 2025-08-29 20:56:43.972063 | orchestrator | 2025-08-29 20:56:43.972119 | orchestrator | 2025-08-29 20:56:43.972131 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 20:56:43.972143 | orchestrator | 2025-08-29 20:56:43.972154 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 20:56:43.972166 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:00.221) 0:00:00.221 ********* 2025-08-29 20:56:43.972177 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 20:56:43.972188 | orchestrator | 2025-08-29 20:56:43.972200 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 20:56:43.972210 | orchestrator | Friday 29 August 2025 20:56:21 +0000 (0:00:00.837) 0:00:01.058 ********* 2025-08-29 20:56:43.972222 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:43.972233 | orchestrator | 2025-08-29 20:56:43.972244 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 20:56:43.972255 | orchestrator | Friday 29 August 2025 20:56:22 +0000 (0:00:01.091) 0:00:02.150 ********* 2025-08-29 20:56:43.972266 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:43.972277 | orchestrator | 2025-08-29 20:56:43.972288 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:56:43.972299 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:56:43.972311 | orchestrator | 2025-08-29 20:56:43.972322 | orchestrator | 2025-08-29 20:56:43.972332 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:56:43.972343 | orchestrator | Friday 29 August 2025 20:56:23 +0000 (0:00:00.622) 0:00:02.772 ********* 2025-08-29 20:56:43.972354 | orchestrator | =============================================================================== 2025-08-29 20:56:43.972364 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s 2025-08-29 20:56:43.972375 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s 2025-08-29 20:56:43.972386 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.62s 2025-08-29 20:56:43.972397 | orchestrator | 2025-08-29 20:56:43.972408 | orchestrator | 2025-08-29 20:56:43.972419 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 20:56:43.972430 | orchestrator | 2025-08-29 20:56:43.972440 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 20:56:43.972451 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:00.184) 0:00:00.184 ********* 2025-08-29 20:56:43.972462 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:43.972473 | orchestrator | 2025-08-29 20:56:43.972484 | orchestrator | TASK [Create .kube directory] ************************************************** 
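The kubeconfig fetched from the first master points at https://127.0.0.1:6443 by default, which is why these plays rewrite the server address before the file is used on the manager. A minimal sketch of that rewrite, with the target path assumed for illustration and the VIP taken from the "Configure kubectl cluster to https://192.168.16.8:6443" task earlier in this run:

- name: Change server address in the kubeconfig file
  ansible.builtin.replace:
    path: /opt/configuration/environments/kubernetes/kubeconfig   # illustrative path, not taken from the log
    regexp: 'https://127\.0\.0\.1:6443'
    replace: 'https://192.168.16.8:6443'                           # kube VIP used by this deployment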
2025-08-29 20:56:43.972494 | orchestrator | Friday 29 August 2025 20:56:21 +0000 (0:00:00.561) 0:00:00.745 ********* 2025-08-29 20:56:43.972505 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:43.972516 | orchestrator | 2025-08-29 20:56:43.972526 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 20:56:43.972537 | orchestrator | Friday 29 August 2025 20:56:21 +0000 (0:00:00.529) 0:00:01.274 ********* 2025-08-29 20:56:43.972548 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 20:56:43.972559 | orchestrator | 2025-08-29 20:56:43.972570 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 20:56:43.972580 | orchestrator | Friday 29 August 2025 20:56:22 +0000 (0:00:00.755) 0:00:02.030 ********* 2025-08-29 20:56:43.972598 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:43.972609 | orchestrator | 2025-08-29 20:56:43.972620 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 20:56:43.972631 | orchestrator | Friday 29 August 2025 20:56:23 +0000 (0:00:01.162) 0:00:03.192 ********* 2025-08-29 20:56:43.972641 | orchestrator | changed: [testbed-manager] 2025-08-29 20:56:43.972654 | orchestrator | 2025-08-29 20:56:43.972666 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 20:56:43.972678 | orchestrator | Friday 29 August 2025 20:56:24 +0000 (0:00:00.713) 0:00:03.905 ********* 2025-08-29 20:56:43.972697 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 20:56:43.972709 | orchestrator | 2025-08-29 20:56:43.972722 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 20:56:43.972757 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:01.251) 0:00:05.156 ********* 2025-08-29 20:56:43.972769 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 20:56:43.972782 | orchestrator | 2025-08-29 20:56:43.972795 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 20:56:43.972806 | orchestrator | Friday 29 August 2025 20:56:26 +0000 (0:00:00.609) 0:00:05.766 ********* 2025-08-29 20:56:43.972818 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:43.972830 | orchestrator | 2025-08-29 20:56:43.972842 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 20:56:43.972855 | orchestrator | Friday 29 August 2025 20:56:26 +0000 (0:00:00.315) 0:00:06.081 ********* 2025-08-29 20:56:43.972867 | orchestrator | ok: [testbed-manager] 2025-08-29 20:56:43.972879 | orchestrator | 2025-08-29 20:56:43.972891 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:56:43.972904 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:56:43.972916 | orchestrator | 2025-08-29 20:56:43.972928 | orchestrator | 2025-08-29 20:56:43.972941 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:56:43.972953 | orchestrator | Friday 29 August 2025 20:56:26 +0000 (0:00:00.271) 0:00:06.353 ********* 2025-08-29 20:56:43.972965 | orchestrator | =============================================================================== 2025-08-29 20:56:43.972977 | orchestrator | Make kubeconfig available for 
use inside the manager service ------------ 1.25s 2025-08-29 20:56:43.972990 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.16s 2025-08-29 20:56:43.973002 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2025-08-29 20:56:43.973024 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.71s 2025-08-29 20:56:43.973036 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.61s 2025-08-29 20:56:43.973047 | orchestrator | Get home directory of operator user ------------------------------------- 0.56s 2025-08-29 20:56:43.973058 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2025-08-29 20:56:43.973069 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2025-08-29 20:56:43.973080 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2025-08-29 20:56:43.973091 | orchestrator | 2025-08-29 20:56:43.973102 | orchestrator | 2025-08-29 20:56:43.973112 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:56:43.973123 | orchestrator | 2025-08-29 20:56:43.973133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 20:56:43.973144 | orchestrator | Friday 29 August 2025 20:55:36 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-08-29 20:56:43.973155 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:43.973166 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:43.973177 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:43.973187 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:43.973198 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:43.973208 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:43.973219 | orchestrator | 2025-08-29 20:56:43.973230 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 20:56:43.973241 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.788) 0:00:00.990 ********* 2025-08-29 20:56:43.973252 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973263 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973274 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973290 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973301 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973312 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 20:56:43.973322 | orchestrator | 2025-08-29 20:56:43.973333 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 20:56:43.973344 | orchestrator | 2025-08-29 20:56:43.973355 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 20:56:43.973366 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.899) 0:00:01.890 ********* 2025-08-29 20:56:43.973377 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:56:43.973389 | orchestrator | 2025-08-29 20:56:43.973400 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 20:56:43.973411 | orchestrator | Friday 29 August 2025 20:55:40 +0000 (0:00:02.023) 0:00:03.913 ********* 2025-08-29 20:56:43.973422 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 20:56:43.973433 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 20:56:43.973448 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 20:56:43.973459 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 20:56:43.973470 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 20:56:43.973481 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 20:56:43.973492 | orchestrator | 2025-08-29 20:56:43.973502 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 20:56:43.973513 | orchestrator | Friday 29 August 2025 20:55:41 +0000 (0:00:01.203) 0:00:05.116 ********* 2025-08-29 20:56:43.973524 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 20:56:43.973535 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 20:56:43.973546 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 20:56:43.973557 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 20:56:43.973567 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 20:56:43.973578 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 20:56:43.973589 | orchestrator | 2025-08-29 20:56:43.973600 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 20:56:43.973610 | orchestrator | Friday 29 August 2025 20:55:43 +0000 (0:00:01.652) 0:00:06.769 ********* 2025-08-29 20:56:43.973621 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 20:56:43.973632 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:43.973643 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 20:56:43.973653 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:43.973664 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-08-29 20:56:43.973674 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-08-29 20:56:43.973685 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:43.973696 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-08-29 20:56:43.973707 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:43.973717 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:43.973728 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-08-29 20:56:43.973754 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:43.973765 | orchestrator | 2025-08-29 20:56:43.973775 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-08-29 20:56:43.973786 | orchestrator | Friday 29 August 2025 20:55:45 +0000 (0:00:02.370) 0:00:09.139 ********* 2025-08-29 20:56:43.973797 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:43.973815 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:43.973826 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:43.973842 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 20:56:43.973854 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:43.973865 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:43.973876 | orchestrator | 2025-08-29 20:56:43.973887 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 20:56:43.973898 | orchestrator | Friday 29 August 2025 20:55:46 +0000 (0:00:01.160) 0:00:10.300 ********* 2025-08-29 20:56:43.973912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.973929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.973941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.973953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.973964 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974778 | orchestrator | 2025-08-29 20:56:43.974790 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-08-29 20:56:43.974801 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:02.768) 0:00:13.068 ********* 2025-08-29 20:56:43.974813 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.974987 | orchestrator | 2025-08-29 20:56:43.974999 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-08-29 20:56:43.975010 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:04.206) 0:00:17.275 ********* 2025-08-29 20:56:43.975021 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:43.975033 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:43.975044 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:43.975055 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:56:43.975066 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:56:43.975077 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:56:43.975088 | orchestrator | 2025-08-29 20:56:43.975099 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-08-29 20:56:43.975110 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:01.573) 0:00:18.849 ********* 2025-08-29 20:56:43.975122 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975155 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 20:56:43.975299 | orchestrator | 2025-08-29 20:56:43.975316 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975328 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:02.569) 0:00:21.419 ********* 2025-08-29 20:56:43.975339 | orchestrator | 2025-08-29 20:56:43.975350 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975361 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:00.129) 0:00:21.548 ********* 2025-08-29 20:56:43.975372 | orchestrator | 2025-08-29 20:56:43.975383 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975394 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:00.126) 0:00:21.675 ********* 2025-08-29 20:56:43.975404 | orchestrator | 2025-08-29 20:56:43.975415 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975426 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.193) 0:00:21.869 ********* 2025-08-29 20:56:43.975437 | orchestrator | 2025-08-29 20:56:43.975448 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975459 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.250) 0:00:22.119 ********* 2025-08-29 20:56:43.975470 | orchestrator | 2025-08-29 20:56:43.975480 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 20:56:43.975491 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.164) 0:00:22.283 ********* 2025-08-29 20:56:43.975502 | orchestrator | 2025-08-29 20:56:43.975513 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-08-29 20:56:43.975524 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.398) 0:00:22.682 ********* 2025-08-29 20:56:43.975535 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:43.975546 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:43.975557 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:43.975567 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:43.975578 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:43.975589 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:43.975600 | orchestrator | 2025-08-29 20:56:43.975611 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-08-29 20:56:43.975622 | orchestrator | Friday 29 August 2025 20:56:09 +0000 (0:00:10.450) 0:00:33.132 ********* 2025-08-29 20:56:43.975633 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:56:43.975644 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:56:43.975655 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:56:43.975666 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:56:43.975681 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:56:43.975692 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:56:43.975703 | orchestrator | 2025-08-29 
20:56:43.975714 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 20:56:43.975744 | orchestrator | Friday 29 August 2025 20:56:11 +0000 (0:00:01.734) 0:00:34.866 ********* 2025-08-29 20:56:43.975757 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:43.975768 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:43.975779 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:43.975790 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:43.975800 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:43.975811 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:43.975822 | orchestrator | 2025-08-29 20:56:43.975833 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-08-29 20:56:43.975844 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:09.086) 0:00:43.953 ********* 2025-08-29 20:56:43.975855 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-08-29 20:56:43.975866 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-08-29 20:56:43.975877 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-08-29 20:56:43.975888 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-08-29 20:56:43.975905 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-08-29 20:56:43.975916 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-08-29 20:56:43.975927 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-08-29 20:56:43.975938 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-08-29 20:56:43.975949 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-08-29 20:56:43.975959 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-08-29 20:56:43.975970 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-08-29 20:56:43.975981 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-08-29 20:56:43.975992 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 20:56:43.976003 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 20:56:43.976014 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 20:56:43.976024 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 20:56:43.976035 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 
20:56:43.976046 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 20:56:43.976057 | orchestrator | 2025-08-29 20:56:43.976068 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-08-29 20:56:43.976079 | orchestrator | Friday 29 August 2025 20:56:28 +0000 (0:00:08.069) 0:00:52.023 ********* 2025-08-29 20:56:43.976090 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-08-29 20:56:43.976101 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:43.976112 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-08-29 20:56:43.976123 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:43.976134 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-08-29 20:56:43.976145 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:43.976156 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-08-29 20:56:43.976167 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-08-29 20:56:43.976177 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-08-29 20:56:43.976188 | orchestrator | 2025-08-29 20:56:43.976199 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-08-29 20:56:43.976210 | orchestrator | Friday 29 August 2025 20:56:31 +0000 (0:00:02.739) 0:00:54.762 ********* 2025-08-29 20:56:43.976221 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-08-29 20:56:43.976232 | orchestrator | skipping: [testbed-node-3] 2025-08-29 20:56:43.976243 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-08-29 20:56:43.976254 | orchestrator | skipping: [testbed-node-4] 2025-08-29 20:56:43.976265 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-08-29 20:56:43.976276 | orchestrator | skipping: [testbed-node-5] 2025-08-29 20:56:43.976287 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-08-29 20:56:43.976298 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-08-29 20:56:43.976314 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-08-29 20:56:43.976325 | orchestrator | 2025-08-29 20:56:43.976340 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 20:56:43.976351 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:03.679) 0:00:58.442 ********* 2025-08-29 20:56:43.976362 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:56:43.976373 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:56:43.976391 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:56:43.976402 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:56:43.976413 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:56:43.976424 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:56:43.976435 | orchestrator | 2025-08-29 20:56:43.976446 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:56:43.976457 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 20:56:43.976468 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 20:56:43.976480 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2025-08-29 20:56:43.976491 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 20:56:43.976502 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 20:56:43.976513 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 20:56:43.976524 | orchestrator |
2025-08-29 20:56:43.976535 | orchestrator |
2025-08-29 20:56:43.976546 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 20:56:43.976557 | orchestrator | Friday 29 August 2025 20:56:42 +0000 (0:00:07.878) 0:01:06.320 *********
2025-08-29 20:56:43.976568 | orchestrator | ===============================================================================
2025-08-29 20:56:43.976579 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 16.97s
2025-08-29 20:56:43.976590 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.45s
2025-08-29 20:56:43.976600 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.07s
2025-08-29 20:56:43.976611 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.21s
2025-08-29 20:56:43.976622 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.68s
2025-08-29 20:56:43.976633 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.77s
2025-08-29 20:56:43.976644 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.74s
2025-08-29 20:56:43.976655 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.57s
2025-08-29 20:56:43.976666 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.37s
2025-08-29 20:56:43.976677 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.02s
2025-08-29 20:56:43.976688 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.73s
2025-08-29 20:56:43.976699 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.65s
2025-08-29 20:56:43.976710 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.57s
2025-08-29 20:56:43.976720 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.26s
2025-08-29 20:56:43.976754 | orchestrator | module-load : Load modules ---------------------------------------------- 1.20s
2025-08-29 20:56:43.976766 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.16s
2025-08-29 20:56:43.976839 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2025-08-29 20:56:43.976851 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s
2025-08-29 20:56:43.976862 | orchestrator | 2025-08-29 20:56:43 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED
2025-08-29 20:56:43.976873 | orchestrator | 2025-08-29 20:56:43 | INFO  | Wait 1 second(s) until the next check
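The openvswitch play summarised above comes down to a few host-level changes: load the openvswitch kernel module, start the openvswitch_db and openvswitch_vswitchd containers, tag the local Open vSwitch instance with its system-id and hostname (leaving hw-offload unset), and, on the network nodes only, make sure the br-ex bridge exists with a vxlan0 port. Kolla drives all of this through its own modules and an ovs-vsctl wrapper; the sketch below only shows roughly equivalent ovs-vsctl calls driven from Python as an illustration, not the role's actual implementation.

```python
import subprocess


def ovs_vsctl(*args: str) -> None:
    """Run a single ovs-vsctl command and fail loudly if it errors."""
    subprocess.run(["ovs-vsctl", *args], check=True)


def configure_openvswitch(hostname: str,
                          external_bridge: str = "br-ex",
                          external_port: str = "vxlan0") -> None:
    # Tag this OVS instance the way the "Set system-id, hostname and
    # hw-offload" task does; hw-offload is removed, i.e. left disabled.
    ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={hostname}")
    ovs_vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={hostname}")
    ovs_vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")

    # On the network nodes the play also ensures the external bridge and its
    # port exist; --may-exist keeps both calls idempotent.
    ovs_vsctl("--may-exist", "add-br", external_bridge)
    ovs_vsctl("--may-exist", "add-port", external_bridge, external_port)


# Example (hostname taken from the log's first network node):
# configure_openvswitch("testbed-node-0")
```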
2025-08-29 20:56:47.013180 | orchestrator | 2025-08-29 20:56:47 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED

[the status check repeats roughly every three seconds from 20:56:47 to 20:57:50; tasks d17d85b9-add6-457b-978d-dd39222789b5, 925653dd-20f5-4967-9def-7e7e85f8cb5b, 17f494d1-9d33-4a39-bc9e-563384931e54 and 066f58b4-2766-46fa-b6fd-e805d7ad1e94 all remain in state STARTED]

2025-08-29 20:57:50.918414 | orchestrator | 2025-08-29 20:57:50 | INFO  | Wait 1
second(s) until the next check 2025-08-29 20:57:53.960050 | orchestrator | 2025-08-29 20:57:53 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:57:53.961551 | orchestrator | 2025-08-29 20:57:53 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:57:53.962314 | orchestrator | 2025-08-29 20:57:53 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:57:53.963617 | orchestrator | 2025-08-29 20:57:53 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:57:53.963650 | orchestrator | 2025-08-29 20:57:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:57:57.004422 | orchestrator | 2025-08-29 20:57:57 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:57:57.007711 | orchestrator | 2025-08-29 20:57:57 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:57:57.010699 | orchestrator | 2025-08-29 20:57:57 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:57:57.012893 | orchestrator | 2025-08-29 20:57:57 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:57:57.013603 | orchestrator | 2025-08-29 20:57:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:58:00.050495 | orchestrator | 2025-08-29 20:58:00 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:58:00.052103 | orchestrator | 2025-08-29 20:58:00 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:58:00.054482 | orchestrator | 2025-08-29 20:58:00 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:58:00.056160 | orchestrator | 2025-08-29 20:58:00 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:58:00.056370 | orchestrator | 2025-08-29 20:58:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:58:03.087291 | orchestrator | 2025-08-29 20:58:03 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:58:03.087847 | orchestrator | 2025-08-29 20:58:03 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:58:03.089686 | orchestrator | 2025-08-29 20:58:03 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:58:03.090744 | orchestrator | 2025-08-29 20:58:03 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:58:03.091034 | orchestrator | 2025-08-29 20:58:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:58:06.122633 | orchestrator | 2025-08-29 20:58:06 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:58:06.124693 | orchestrator | 2025-08-29 20:58:06 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:58:06.126192 | orchestrator | 2025-08-29 20:58:06 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED 2025-08-29 20:58:06.127717 | orchestrator | 2025-08-29 20:58:06 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state STARTED 2025-08-29 20:58:06.129628 | orchestrator | 2025-08-29 20:58:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:58:09.164868 | orchestrator | 2025-08-29 20:58:09 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:58:09.166644 | orchestrator | 2025-08-29 20:58:09 | INFO  | Task 
2025-08-29 20:58:18.284699 | orchestrator | 2025-08-29 20:58:18 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED
2025-08-29 20:58:18.285670 | orchestrator | 2025-08-29 20:58:18 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED
2025-08-29 20:58:18.286767 | orchestrator | 2025-08-29 20:58:18 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED
2025-08-29 20:58:18.287910 | orchestrator | 2025-08-29 20:58:18 | INFO  | Task 066f58b4-2766-46fa-b6fd-e805d7ad1e94 is in state SUCCESS
2025-08-29 20:58:18.287932 | orchestrator | 2025-08-29 20:58:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 20:58:18.288962 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-08-29 20:58:18.288986 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 20:58:18.288998 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:00.113) 0:00:00.113 *********
2025-08-29 20:58:18.289011 | orchestrator | ok: [localhost] => {
2025-08-29 20:58:18.289025 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-08-29 20:58:18.289038 | orchestrator | }
2025-08-29 20:58:18.289061 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-08-29 20:58:18.289073 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:00.069) 0:00:00.182 *********
2025-08-29 20:58:18.289086 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-08-29 20:58:18.289099 | orchestrator | ...ignoring
2025-08-29 20:58:18.289123 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-08-29 20:58:18.289135 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:04.650) 0:00:04.833 *********
2025-08-29 20:58:18.289147 | orchestrator | skipping: [localhost]
2025-08-29 20:58:18.289170 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-08-29 20:58:18.289181 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.046) 0:00:04.879 *********
2025-08-29 20:58:18.289218 | orchestrator | ok: [localhost]
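The ignored failure above is expected on a first deployment: the play only probes the RabbitMQ management endpoint to decide between a fresh deploy and an upgrade, and nothing is listening on 192.168.16.9:15672 yet. The error text matches Ansible's wait_for module, so the check and the follow-up selection are presumably close to the sketch below (task layout and variable names are illustrative, not a verbatim copy of the playbook):

```yaml
- name: Check RabbitMQ service
  ansible.builtin.wait_for:
    host: 192.168.16.9              # internal API address seen in the log
    port: 15672                     # RabbitMQ management interface
    search_regex: RabbitMQ Management
    timeout: 2
  register: rabbitmq_service_check
  ignore_errors: true               # failure is fine before the first deploy

- name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: upgrade
  when: rabbitmq_service_check is not failed

- name: Set kolla_action_rabbitmq = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: "{{ kolla_action_ng }}"   # assumed variable carrying the default action
  when: rabbitmq_service_check is failed
```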
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-08-29 20:58:18.289099 | orchestrator | ...ignoring 2025-08-29 20:58:18.289112 | orchestrator | 2025-08-29 20:58:18.289123 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-08-29 20:58:18.289135 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:04.650) 0:00:04.833 ********* 2025-08-29 20:58:18.289147 | orchestrator | skipping: [localhost] 2025-08-29 20:58:18.289158 | orchestrator | 2025-08-29 20:58:18.289170 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-08-29 20:58:18.289181 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.046) 0:00:04.879 ********* 2025-08-29 20:58:18.289218 | orchestrator | ok: [localhost] 2025-08-29 20:58:18.289231 | orchestrator | 2025-08-29 20:58:18.289242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 20:58:18.289253 | orchestrator | 2025-08-29 20:58:18.289265 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 20:58:18.289276 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.144) 0:00:05.023 ********* 2025-08-29 20:58:18.289288 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:58:18.289299 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:58:18.289310 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:58:18.289322 | orchestrator | 2025-08-29 20:58:18.289333 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 20:58:18.289345 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.323) 0:00:05.347 ********* 2025-08-29 20:58:18.289356 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-08-29 20:58:18.289368 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-08-29 20:58:18.289379 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-08-29 20:58:18.289391 | orchestrator | 2025-08-29 20:58:18.289403 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-08-29 20:58:18.289414 | orchestrator | 2025-08-29 20:58:18.289426 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 20:58:18.289437 | orchestrator | Friday 29 August 2025 20:55:59 +0000 (0:00:00.772) 0:00:06.119 ********* 2025-08-29 20:58:18.289449 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:58:18.289460 | orchestrator | 2025-08-29 20:58:18.289472 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 20:58:18.289483 | orchestrator | Friday 29 August 2025 20:56:00 +0000 (0:00:00.922) 0:00:07.042 ********* 2025-08-29 20:58:18.289494 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:58:18.289506 | orchestrator | 2025-08-29 20:58:18.289517 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-08-29 20:58:18.289530 | orchestrator | Friday 29 August 2025 20:56:01 +0000 (0:00:00.945) 0:00:07.988 ********* 2025-08-29 20:58:18.289543 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.289556 | orchestrator | 2025-08-29 20:58:18.289692 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-08-29 20:58:18.289721 | orchestrator | Friday 29 August 2025 20:56:01 +0000 (0:00:00.367) 0:00:08.355 ********* 2025-08-29 20:58:18.289734 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.289747 | orchestrator | 2025-08-29 20:58:18.289758 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-08-29 20:58:18.289769 | orchestrator | Friday 29 August 2025 20:56:02 +0000 (0:00:00.381) 0:00:08.737 ********* 2025-08-29 20:58:18.289780 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.289790 | orchestrator | 2025-08-29 20:58:18.289801 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-08-29 20:58:18.289811 | orchestrator | Friday 29 August 2025 20:56:02 +0000 (0:00:00.497) 0:00:09.234 ********* 2025-08-29 20:58:18.289822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.289833 | orchestrator | 2025-08-29 20:58:18.289844 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 20:58:18.289859 | orchestrator | Friday 29 August 2025 20:56:03 +0000 (0:00:00.570) 0:00:09.805 ********* 2025-08-29 20:58:18.289884 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:58:18.289904 | orchestrator | 2025-08-29 20:58:18.289922 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 20:58:18.289940 | orchestrator | Friday 29 August 2025 20:56:03 +0000 (0:00:00.839) 0:00:10.645 ********* 2025-08-29 20:58:18.289958 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:58:18.289970 | orchestrator | 2025-08-29 20:58:18.289981 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-08-29 20:58:18.290004 | orchestrator | Friday 29 August 2025 20:56:04 +0000 (0:00:00.706) 0:00:11.351 ********* 2025-08-29 20:58:18.290014 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.290107 | orchestrator | 2025-08-29 20:58:18.290118 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-08-29 20:58:18.290129 | orchestrator | Friday 29 August 2025 20:56:05 +0000 (0:00:00.391) 0:00:11.743 ********* 2025-08-29 20:58:18.290140 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.290151 | orchestrator | 2025-08-29 20:58:18.290177 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-08-29 20:58:18.290189 | orchestrator | Friday 29 August 2025 20:56:05 +0000 (0:00:00.290) 0:00:12.033 ********* 2025-08-29 20:58:18.290205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290257 | orchestrator | 2025-08-29 20:58:18.290274 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-08-29 20:58:18.290286 | orchestrator | Friday 29 August 2025 20:56:06 +0000 (0:00:01.643) 0:00:13.676 ********* 2025-08-29 20:58:18.290308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290345 | orchestrator | 2025-08-29 20:58:18.290357 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-08-29 20:58:18.290367 | orchestrator | Friday 29 August 2025 20:56:09 +0000 (0:00:02.047) 0:00:15.724 ********* 2025-08-29 20:58:18.290378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 20:58:18.290390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 20:58:18.290407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 20:58:18.290418 | orchestrator | 2025-08-29 20:58:18.290428 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-08-29 20:58:18.290439 | orchestrator | Friday 29 August 2025 20:56:11 +0000 (0:00:02.817) 0:00:18.541 ********* 
2025-08-29 20:58:18.290450 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 20:58:18.290466 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 20:58:18.290476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 20:58:18.290487 | orchestrator | 2025-08-29 20:58:18.290498 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-08-29 20:58:18.290509 | orchestrator | Friday 29 August 2025 20:56:15 +0000 (0:00:03.318) 0:00:21.859 ********* 2025-08-29 20:58:18.290520 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 20:58:18.290530 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 20:58:18.290541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 20:58:18.290552 | orchestrator | 2025-08-29 20:58:18.290569 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-08-29 20:58:18.290580 | orchestrator | Friday 29 August 2025 20:56:16 +0000 (0:00:01.582) 0:00:23.442 ********* 2025-08-29 20:58:18.290591 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 20:58:18.290602 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 20:58:18.290612 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 20:58:18.290623 | orchestrator | 2025-08-29 20:58:18.290634 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-08-29 20:58:18.290668 | orchestrator | Friday 29 August 2025 20:56:18 +0000 (0:00:02.151) 0:00:25.593 ********* 2025-08-29 20:58:18.290679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 20:58:18.290690 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 20:58:18.290701 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 20:58:18.290711 | orchestrator | 2025-08-29 20:58:18.290722 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-08-29 20:58:18.290733 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:02.075) 0:00:27.669 ********* 2025-08-29 20:58:18.290744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 20:58:18.290754 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 20:58:18.290766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 20:58:18.290776 | orchestrator | 2025-08-29 20:58:18.290787 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 20:58:18.290798 | orchestrator | Friday 29 August 2025 20:56:23 +0000 (0:00:02.073) 0:00:29.742 ********* 2025-08-29 20:58:18.290808 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:58:18.290819 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 20:58:18.290830 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:58:18.290840 | orchestrator | 2025-08-29 20:58:18.290851 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 20:58:18.290862 | orchestrator | Friday 29 August 2025 20:56:24 +0000 (0:00:01.027) 0:00:30.770 ********* 2025-08-29 20:58:18.290874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 20:58:18.290921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 20:58:18.290945 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-08-29 20:58:18.290956 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:01.469) 0:00:32.240 *********
2025-08-29 20:58:18.290967 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:58:18.290978 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:58:18.290988 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:58:18.291010 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-08-29 20:58:18.291021 | orchestrator | Friday 29 August 2025 20:56:26 +0000 (0:00:00.850) 0:00:33.091 *********
2025-08-29 20:58:18.291031 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:58:18.291042 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:58:18.291053 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:58:18.291082 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-08-29 20:58:18.291092 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:07.200) 0:00:40.292 *********
2025-08-29 20:58:18.291103 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:58:18.291114 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:58:18.291124 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:58:18.291146 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 20:58:18.291167 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 20:58:18.291178 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:00.364) 0:00:40.656 *********
2025-08-29 20:58:18.291189 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:58:18.291211 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 20:58:18.291222 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:00.596) 0:00:41.252 *********
2025-08-29 20:58:18.291232 | orchestrator | skipping: [testbed-node-0]
2025-08-29 20:58:18.291254 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 20:58:18.291265 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:00.272) 0:00:41.525 *********
2025-08-29 20:58:18.291275 | orchestrator | changed: [testbed-node-0]
2025-08-29 20:58:18.291297 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 20:58:18.291308 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:02.063) 0:00:43.588 *********
2025-08-29 20:58:18.291318 | orchestrator | changed: [testbed-node-0]
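The "Restart rabbitmq services" play is run once per broker: inspect the container, optionally put the node into maintenance mode, restart it and block until RabbitMQ is reachable again; testbed-node-0 is handled above, testbed-node-1 and testbed-node-2 follow below. A rolling restart of that shape can be sketched as follows, using community.docker and a plain port probe; the real role relies on kolla's own container module and readiness checks, so treat this only as an outline:

```yaml
- name: Restart rabbitmq services
  hosts: rabbitmq
  serial: 1                          # one broker at a time, as in the per-node plays in this log
  tasks:
    - name: Restart rabbitmq container
      community.docker.docker_container:
        name: rabbitmq
        state: started
        restart: true

    - name: Waiting for rabbitmq to start
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"
        port: 5672                   # AMQP listener; the actual readiness check may differ
        timeout: 600
```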
2025-08-29 20:58:18.291340 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 20:58:18.291362 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 20:58:18.291372 | orchestrator | Friday 29 August 2025 20:57:33 +0000 (0:00:56.905) 0:01:40.494 *********
2025-08-29 20:58:18.291383 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:58:18.291405 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 20:58:18.291416 | orchestrator | Friday 29 August 2025 20:57:34 +0000 (0:00:00.688) 0:01:41.182 *********
2025-08-29 20:58:18.291427 | orchestrator | skipping: [testbed-node-1]
2025-08-29 20:58:18.291448 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 20:58:18.291459 | orchestrator | Friday 29 August 2025 20:57:34 +0000 (0:00:00.406) 0:01:41.589 *********
2025-08-29 20:58:18.291469 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:58:18.291491 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 20:58:18.291501 | orchestrator | Friday 29 August 2025 20:57:36 +0000 (0:00:01.777) 0:01:43.366 *********
2025-08-29 20:58:18.291516 | orchestrator | changed: [testbed-node-1]
2025-08-29 20:58:18.291538 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-08-29 20:58:18.291559 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-08-29 20:58:18.291570 | orchestrator | Friday 29 August 2025 20:57:52 +0000 (0:00:15.913) 0:01:59.280 *********
2025-08-29 20:58:18.291581 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:58:18.291602 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-08-29 20:58:18.291613 | orchestrator | Friday 29 August 2025 20:57:53 +0000 (0:00:00.568) 0:01:59.849 *********
2025-08-29 20:58:18.291624 | orchestrator | skipping: [testbed-node-2]
2025-08-29 20:58:18.291689 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-08-29 20:58:18.291716 | orchestrator | Friday 29 August 2025 20:57:53 +0000 (0:00:00.221) 0:02:00.071 *********
2025-08-29 20:58:18.291728 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:58:18.291750 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-08-29 20:58:18.291761 | orchestrator | Friday 29 August 2025 20:57:55 +0000 (0:00:01.722) 0:02:01.793 *********
2025-08-29 20:58:18.291772 | orchestrator | changed: [testbed-node-2]
2025-08-29 20:58:18.291794 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-08-29 20:58:18.291815 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-08-29 20:58:18.291826 | orchestrator | Friday 29 August 2025 20:58:12 +0000 (0:00:17.006) 0:02:18.799 *********
2025-08-29 20:58:18.291837 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 20:58:18.291859 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-08-29 20:58:18.291870 | orchestrator | Friday 29 August 2025 20:58:12 +0000 (0:00:00.627) 0:02:19.426 *********
2025-08-29 20:58:18.291881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_outward_rabbitmq_True
2025-08-29 20:58:18.291903 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: outward_rabbitmq_restart
2025-08-29 20:58:18.291925 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:58:18.291936 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:58:18.291947 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:58:18.291968 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-08-29 20:58:18.291979 | orchestrator | skipping: no hosts matched
2025-08-29 20:58:18.292001 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-08-29 20:58:18.292012 | orchestrator | skipping: no hosts matched
2025-08-29 20:58:18.292034 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-08-29 20:58:18.292045 | orchestrator | skipping: no hosts matched
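The post-configuration play above reports "ok" for "Enable all stable feature flags" on every broker, and the warnings merely reflect that the optional outward RabbitMQ instance is not enabled in this testbed. Since RabbitMQ 3.11 the CLI accepts `all` as a shorthand for every stable flag, so a minimal sketch of such a step, assuming it simply shells out to rabbitmqctl inside the container (the real role may go through kolla_toolbox or another wrapper), would be:

```yaml
- name: Enable all stable feature flags
  become: true
  ansible.builtin.command:
    cmd: docker exec rabbitmq rabbitmqctl enable_feature_flag all
  register: rabbitmq_feature_flags
  changed_when: false                # simplified; this sketch does not detect newly enabled flags
```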
2025-08-29 20:58:18.292066 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 20:58:18.292078 | orchestrator | localhost                  : ok=3   changed=0   unreachable=0  failed=0  skipped=1  rescued=0  ignored=1
2025-08-29 20:58:18.292089 | orchestrator | testbed-node-0             : ok=23  changed=14  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
2025-08-29 20:58:18.292100 | orchestrator | testbed-node-1             : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 20:58:18.292111 | orchestrator | testbed-node-2             : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 20:58:18.292144 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 20:58:18.292155 | orchestrator | Friday 29 August 2025 20:58:15 +0000 (0:00:02.840) 0:02:22.267 *********
2025-08-29 20:58:18.292166 | orchestrator | ===============================================================================
2025-08-29 20:58:18.292176 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.83s
2025-08-29 20:58:18.292187 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.20s
2025-08-29 20:58:18.292198 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.56s
2025-08-29 20:58:18.292209 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.65s
2025-08-29 20:58:18.292226 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.32s
2025-08-29 20:58:18.292237 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.84s
2025-08-29 20:58:18.292248 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.82s
2025-08-29 20:58:18.292259 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.15s
2025-08-29 20:58:18.292270 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.08s
2025-08-29 20:58:18.292281 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.07s
2025-08-29 20:58:18.292292 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.05s
2025-08-29 20:58:18.292303 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.85s
2025-08-29 20:58:18.292314 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.64s
2025-08-29 20:58:18.292330 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.58s
2025-08-29 20:58:18.292341 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.47s
2025-08-29 20:58:18.292352 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.03s
2025-08-29 20:58:18.292363 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s
2025-08-29 20:58:18.292374 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.92s
2025-08-29 20:58:18.292385 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.90s
2025-08-29 20:58:18.292396 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s
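After the recap the job returns to polling the remaining OSISM tasks until each reports SUCCESS; the loop itself lives in the osism tooling, not in Ansible. Purely as an illustration of this wait-until-done pattern (the status helper below is hypothetical, not an actual osism command), an equivalent retry task would look like:

```yaml
- name: Wait until a deployment task reports SUCCESS
  ansible.builtin.command:
    cmd: /usr/local/bin/show-task-state {{ task_id }}   # hypothetical helper printing STARTED or SUCCESS
  register: task_state
  until: task_state.stdout == "SUCCESS"
  retries: 600                       # roughly 30 minutes at 3-second intervals
  delay: 3
  changed_when: false
```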
2025-08-29 20:58:21.328561 | orchestrator | 2025-08-29 20:58:21 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED
2025-08-29 20:58:21.330693 | orchestrator | 2025-08-29 20:58:21 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED
2025-08-29 20:58:21.332135 | orchestrator | 2025-08-29 20:58:21 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state STARTED
2025-08-29 20:58:21.332559 | orchestrator | 2025-08-29 20:58:21 | INFO  | Wait 1 second(s) until the next check
[... the same three status lines (tasks d17d85b9, 925653dd and 17f494d1 in state STARTED) and the wait message repeat roughly every 3 seconds from 20:58:24 through 20:59:19 ...]
2025-08-29 20:59:22.170775 | orchestrator | 2025-08-29 20:59:22 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED
2025-08-29 20:59:22.171403 | orchestrator | 2025-08-29 20:59:22 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED
2025-08-29 20:59:22.173218 | orchestrator | 2025-08-29 20:59:22 | INFO  | Task 17f494d1-9d33-4a39-bc9e-563384931e54 is in state SUCCESS
2025-08-29 20:59:22.173362 | orchestrator | 2025-08-29 20:59:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 20:59:22.175799 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 20:59:22.175822 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 20:59:22.175834 | orchestrator | Friday 29 August 2025 20:56:46 +0000 (0:00:00.152) 0:00:00.152 *********
2025-08-29 20:59:22.175845 | orchestrator | ok: [testbed-node-3]
2025-08-29 20:59:22.175858 | orchestrator | ok: [testbed-node-4]
2025-08-29 20:59:22.175869 | orchestrator | ok: [testbed-node-5]
2025-08-29 20:59:22.175880 | orchestrator | ok: [testbed-node-0]
2025-08-29 20:59:22.175891 | orchestrator | ok: [testbed-node-1]
2025-08-29 20:59:22.175902 | orchestrator | ok: [testbed-node-2]
2025-08-29 20:59:22.175954 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 20:59:22.175966 | orchestrator | Friday 29 August 2025 20:56:47 +0000 (0:00:00.700) 0:00:00.852 *********
2025-08-29 20:59:22.175977 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-08-29 20:59:22.176064 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-08-29 20:59:22.176078 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-08-29 20:59:22.176089 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-08-29 20:59:22.176100 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-08-29 20:59:22.176112 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
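The grouping tasks above place every host into dynamically named groups such as enable_ovn_True, which the following plays then target. The underlying pattern is Ansible's group_by module with the service toggle interpolated into the group name, roughly:

```yaml
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_ovn_{{ enable_ovn | bool }}"
  changed_when: false                # grouping is reported as 'ok' in the log
```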
20:59:22.176134 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 20:59:22.176147 | orchestrator | 2025-08-29 20:59:22.176158 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29 20:59:22.176170 | orchestrator | Friday 29 August 2025 20:56:48 +0000 (0:00:00.730) 0:00:01.583 ********* 2025-08-29 20:59:22.176182 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:59:22.176195 | orchestrator | 2025-08-29 20:59:22.176598 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-08-29 20:59:22.176612 | orchestrator | Friday 29 August 2025 20:56:49 +0000 (0:00:00.939) 0:00:02.522 ********* 2025-08-29 20:59:22.176627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176761 | orchestrator | 2025-08-29 20:59:22.176772 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-08-29 20:59:22.176783 | orchestrator | Friday 29 August 2025 20:56:50 +0000 (0:00:01.059) 0:00:03.582 ********* 2025-08-29 20:59:22.176794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176875 | orchestrator | 2025-08-29 20:59:22.176886 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-08-29 20:59:22.176897 | orchestrator | 
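The "Ensuring config directories exist" and "Copying over config.json files for services" tasks above render the ovn-controller configuration onto each node under /etc/kolla/ovn-controller/, the directory that is bind-mounted read-only into the container as /var/lib/kolla/config_files/ (see the volume list above). To inspect what was generated on a node, something like the following should work (paths taken from the log; the exact file set may vary):

  # Show the rendered kolla config for ovn-controller on a testbed node.
  ls -l /etc/kolla/ovn-controller/
  cat /etc/kolla/ovn-controller/config.json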
Friday 29 August 2025 20:56:51 +0000 (0:00:01.621) 0:00:05.204 ********* 2025-08-29 20:59:22.176909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.176991 | orchestrator | 2025-08-29 20:59:22.177002 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-08-29 20:59:22.177013 | orchestrator | Friday 29 August 2025 20:56:53 +0000 (0:00:01.386) 0:00:06.590 ********* 2025-08-29 20:59:22.177029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177103 | orchestrator | 2025-08-29 20:59:22.177114 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-08-29 20:59:22.177125 | orchestrator | Friday 29 August 2025 20:56:54 +0000 (0:00:01.484) 0:00:08.074 ********* 2025-08-29 20:59:22.177136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.177214 | orchestrator | 2025-08-29 20:59:22.177225 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-08-29 20:59:22.177235 | orchestrator | Friday 29 August 2025 20:56:56 +0000 (0:00:01.499) 0:00:09.574 ********* 2025-08-29 20:59:22.177247 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:59:22.177259 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:59:22.177269 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:59:22.177280 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.177291 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.177301 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.177312 | orchestrator | 2025-08-29 20:59:22.177323 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-08-29 20:59:22.177334 | orchestrator | Friday 29 August 2025 20:56:58 +0000 (0:00:02.805) 0:00:12.379 ********* 2025-08-29 20:59:22.177344 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-08-29 20:59:22.177355 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-08-29 20:59:22.177366 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': 
'192.168.16.15'}) 2025-08-29 20:59:22.177382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-08-29 20:59:22.177401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-08-29 20:59:22.177411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-08-29 20:59:22.177422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177433 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177444 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 20:59:22.177487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177530 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177542 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177553 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177564 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177575 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 20:59:22.177586 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177620 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 20:59:22.177659 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177670 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177691 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177713 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 20:59:22.177723 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177745 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177756 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 20:59:22.177796 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 20:59:22.177807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 20:59:22.177818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 20:59:22.177829 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 20:59:22.177845 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 20:59:22.177857 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 20:59:22.177868 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-08-29 20:59:22.177879 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-08-29 20:59:22.177890 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-08-29 20:59:22.177902 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-08-29 20:59:22.177913 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-08-29 20:59:22.177924 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 20:59:22.177935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 20:59:22.177946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 20:59:22.177957 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 
'state': 'absent'}) 2025-08-29 20:59:22.177968 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 20:59:22.177979 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 20:59:22.177990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 20:59:22.178001 | orchestrator | 2025-08-29 20:59:22.178012 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178078 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:20.106) 0:00:32.485 ********* 2025-08-29 20:59:22.178090 | orchestrator | 2025-08-29 20:59:22.178101 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178112 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.240) 0:00:32.726 ********* 2025-08-29 20:59:22.178123 | orchestrator | 2025-08-29 20:59:22.178134 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178150 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.094) 0:00:32.821 ********* 2025-08-29 20:59:22.178161 | orchestrator | 2025-08-29 20:59:22.178172 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178183 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.081) 0:00:32.902 ********* 2025-08-29 20:59:22.178201 | orchestrator | 2025-08-29 20:59:22.178212 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178223 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.076) 0:00:32.979 ********* 2025-08-29 20:59:22.178234 | orchestrator | 2025-08-29 20:59:22.178245 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 20:59:22.178256 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.062) 0:00:33.041 ********* 2025-08-29 20:59:22.178267 | orchestrator | 2025-08-29 20:59:22.178277 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-08-29 20:59:22.178288 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:00.064) 0:00:33.105 ********* 2025-08-29 20:59:22.178299 | orchestrator | ok: [testbed-node-4] 2025-08-29 20:59:22.178310 | orchestrator | ok: [testbed-node-3] 2025-08-29 20:59:22.178321 | orchestrator | ok: [testbed-node-5] 2025-08-29 20:59:22.178332 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178343 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178354 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178364 | orchestrator | 2025-08-29 20:59:22.178375 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-08-29 20:59:22.178386 | orchestrator | Friday 29 August 2025 20:57:21 +0000 (0:00:02.151) 0:00:35.256 ********* 2025-08-29 20:59:22.178397 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.178408 | orchestrator | changed: [testbed-node-3] 2025-08-29 20:59:22.178419 | orchestrator | changed: [testbed-node-5] 2025-08-29 20:59:22.178430 | orchestrator | changed: [testbed-node-4] 2025-08-29 20:59:22.178440 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.178451 | orchestrator | 
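For reference, the "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks above amount to a handful of Open vSwitch settings that can also be applied or inspected by hand with ovs-vsctl. A sketch using the values logged for testbed-node-0; kolla-ansible performs the same changes through its own modules, so this is only an illustration:

  # Create the integration bridge if it does not exist yet.
  ovs-vsctl --may-exist add-br br-int
  # Set the OVN-related external_ids on the local Open vSwitch instance
  # (values copied from the log output for testbed-node-0). Values containing
  # commas are passed with inner double quotes so OVSDB treats them as one string.
  ovs-vsctl set open_vswitch . \
      external_ids:ovn-encap-ip=192.168.16.10 \
      external_ids:ovn-encap-type=geneve \
      'external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"' \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      'external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"'
  # Verify what ended up in OVSDB.
  ovs-vsctl get open_vswitch . external_ids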
changed: [testbed-node-2] 2025-08-29 20:59:22.178461 | orchestrator | 2025-08-29 20:59:22.178472 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-08-29 20:59:22.178483 | orchestrator | 2025-08-29 20:59:22.178543 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 20:59:22.178556 | orchestrator | Friday 29 August 2025 20:57:57 +0000 (0:00:35.345) 0:01:10.602 ********* 2025-08-29 20:59:22.178567 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:59:22.178578 | orchestrator | 2025-08-29 20:59:22.178588 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 20:59:22.178599 | orchestrator | Friday 29 August 2025 20:57:57 +0000 (0:00:00.665) 0:01:11.267 ********* 2025-08-29 20:59:22.178610 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:59:22.178621 | orchestrator | 2025-08-29 20:59:22.178640 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-08-29 20:59:22.178651 | orchestrator | Friday 29 August 2025 20:57:58 +0000 (0:00:00.524) 0:01:11.792 ********* 2025-08-29 20:59:22.178662 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178673 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178683 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178694 | orchestrator | 2025-08-29 20:59:22.178705 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-08-29 20:59:22.178716 | orchestrator | Friday 29 August 2025 20:57:59 +0000 (0:00:00.976) 0:01:12.768 ********* 2025-08-29 20:59:22.178727 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178738 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178749 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178759 | orchestrator | 2025-08-29 20:59:22.178770 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-08-29 20:59:22.178781 | orchestrator | Friday 29 August 2025 20:57:59 +0000 (0:00:00.365) 0:01:13.133 ********* 2025-08-29 20:59:22.178792 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178803 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178813 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178824 | orchestrator | 2025-08-29 20:59:22.178842 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-08-29 20:59:22.178853 | orchestrator | Friday 29 August 2025 20:58:00 +0000 (0:00:00.319) 0:01:13.453 ********* 2025-08-29 20:59:22.178864 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178874 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178885 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178895 | orchestrator | 2025-08-29 20:59:22.178906 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-08-29 20:59:22.178917 | orchestrator | Friday 29 August 2025 20:58:00 +0000 (0:00:00.307) 0:01:13.760 ********* 2025-08-29 20:59:22.178928 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.178939 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.178949 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.178960 | orchestrator | 2025-08-29 20:59:22.178970 | orchestrator 
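The lookup_cluster.yml tasks above decide between bootstrapping a new cluster and joining an existing one by checking for the ovn_nb_db / ovn_sb_db container volumes and for already-listening database services. On a controller node this can be reproduced by hand, assuming the Docker engine backend used here:

  # An empty result means no previous OVN database state exists on this node,
  # so the role bootstraps a fresh NB/SB cluster (as happens in this run).
  docker volume ls --filter name=ovn_nb_db --filter name=ovn_sb_db
  # Inspect the volumes once they exist (this errors if they have not been created yet).
  docker volume inspect ovn_nb_db ovn_sb_db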
| TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-08-29 20:59:22.178981 | orchestrator | Friday 29 August 2025 20:58:00 +0000 (0:00:00.469) 0:01:14.230 ********* 2025-08-29 20:59:22.178992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179014 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179024 | orchestrator | 2025-08-29 20:59:22.179035 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-08-29 20:59:22.179046 | orchestrator | Friday 29 August 2025 20:58:01 +0000 (0:00:00.300) 0:01:14.531 ********* 2025-08-29 20:59:22.179057 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179068 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179079 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179089 | orchestrator | 2025-08-29 20:59:22.179100 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-08-29 20:59:22.179111 | orchestrator | Friday 29 August 2025 20:58:01 +0000 (0:00:00.296) 0:01:14.827 ********* 2025-08-29 20:59:22.179122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179133 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179143 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179154 | orchestrator | 2025-08-29 20:59:22.179165 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-08-29 20:59:22.179176 | orchestrator | Friday 29 August 2025 20:58:01 +0000 (0:00:00.281) 0:01:15.109 ********* 2025-08-29 20:59:22.179186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179233 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179256 | orchestrator | 2025-08-29 20:59:22.179267 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-08-29 20:59:22.179278 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.444) 0:01:15.553 ********* 2025-08-29 20:59:22.179288 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179299 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179310 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179321 | orchestrator | 2025-08-29 20:59:22.179331 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-08-29 20:59:22.179342 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.281) 0:01:15.835 ********* 2025-08-29 20:59:22.179353 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179364 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179375 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179385 | orchestrator | 2025-08-29 20:59:22.179396 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-08-29 20:59:22.179406 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.266) 0:01:16.102 ********* 2025-08-29 20:59:22.179417 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179428 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179438 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179449 | orchestrator | 2025-08-29 20:59:22.179460 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] 
***************************** 2025-08-29 20:59:22.179477 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.261) 0:01:16.363 ********* 2025-08-29 20:59:22.179488 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179515 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179526 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179537 | orchestrator | 2025-08-29 20:59:22.179548 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-08-29 20:59:22.179559 | orchestrator | Friday 29 August 2025 20:58:03 +0000 (0:00:00.485) 0:01:16.849 ********* 2025-08-29 20:59:22.179569 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179580 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179591 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179602 | orchestrator | 2025-08-29 20:59:22.179613 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-08-29 20:59:22.179624 | orchestrator | Friday 29 August 2025 20:58:03 +0000 (0:00:00.309) 0:01:17.158 ********* 2025-08-29 20:59:22.179634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179645 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179656 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179667 | orchestrator | 2025-08-29 20:59:22.179684 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-08-29 20:59:22.179695 | orchestrator | Friday 29 August 2025 20:58:04 +0000 (0:00:00.297) 0:01:17.456 ********* 2025-08-29 20:59:22.179706 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179717 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179738 | orchestrator | 2025-08-29 20:59:22.179749 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-08-29 20:59:22.179760 | orchestrator | Friday 29 August 2025 20:58:04 +0000 (0:00:00.301) 0:01:17.757 ********* 2025-08-29 20:59:22.179771 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.179782 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.179793 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.179803 | orchestrator | 2025-08-29 20:59:22.179814 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 20:59:22.179825 | orchestrator | Friday 29 August 2025 20:58:04 +0000 (0:00:00.526) 0:01:18.284 ********* 2025-08-29 20:59:22.179836 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 20:59:22.179847 | orchestrator | 2025-08-29 20:59:22.179858 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-08-29 20:59:22.179868 | orchestrator | Friday 29 August 2025 20:58:05 +0000 (0:00:00.614) 0:01:18.899 ********* 2025-08-29 20:59:22.179879 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.179890 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.179901 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.179912 | orchestrator | 2025-08-29 20:59:22.179923 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-08-29 20:59:22.179934 | orchestrator | Friday 29 August 2025 20:58:05 +0000 (0:00:00.429) 0:01:19.328 
********* 2025-08-29 20:59:22.179944 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.179955 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.179966 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.179977 | orchestrator | 2025-08-29 20:59:22.179987 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-08-29 20:59:22.179998 | orchestrator | Friday 29 August 2025 20:58:06 +0000 (0:00:00.635) 0:01:19.964 ********* 2025-08-29 20:59:22.180009 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180020 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180031 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180041 | orchestrator | 2025-08-29 20:59:22.180052 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-08-29 20:59:22.180063 | orchestrator | Friday 29 August 2025 20:58:06 +0000 (0:00:00.336) 0:01:20.300 ********* 2025-08-29 20:59:22.180080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180102 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180113 | orchestrator | 2025-08-29 20:59:22.180123 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-08-29 20:59:22.180134 | orchestrator | Friday 29 August 2025 20:58:07 +0000 (0:00:00.322) 0:01:20.622 ********* 2025-08-29 20:59:22.180145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180156 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180167 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180177 | orchestrator | 2025-08-29 20:59:22.180193 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-08-29 20:59:22.180204 | orchestrator | Friday 29 August 2025 20:58:07 +0000 (0:00:00.339) 0:01:20.962 ********* 2025-08-29 20:59:22.180215 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180226 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180237 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180248 | orchestrator | 2025-08-29 20:59:22.180259 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-08-29 20:59:22.180270 | orchestrator | Friday 29 August 2025 20:58:08 +0000 (0:00:00.502) 0:01:21.464 ********* 2025-08-29 20:59:22.180280 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180291 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180302 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180313 | orchestrator | 2025-08-29 20:59:22.180324 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-08-29 20:59:22.180335 | orchestrator | Friday 29 August 2025 20:58:08 +0000 (0:00:00.310) 0:01:21.774 ********* 2025-08-29 20:59:22.180346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.180356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.180367 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.180378 | orchestrator | 2025-08-29 20:59:22.180389 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 20:59:22.180400 | orchestrator | Friday 29 August 2025 20:58:08 +0000 (0:00:00.306) 0:01:22.080 ********* 2025-08-29 20:59:22.180412 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180548 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180559 | orchestrator | 2025-08-29 20:59:22.180570 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 20:59:22.180582 | orchestrator | Friday 29 August 2025 20:58:10 +0000 (0:00:01.540) 0:01:23.621 ********* 2025-08-29 20:59:22.180593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180717 | orchestrator | 2025-08-29 20:59:22.180728 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 20:59:22.180739 | orchestrator | Friday 29 August 2025 20:58:14 +0000 (0:00:04.062) 0:01:27.685 ********* 2025-08-29 20:59:22.180750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
20:59:22.180820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.180869 | orchestrator | 2025-08-29 20:59:22.180881 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.180892 | orchestrator | Friday 29 August 2025 20:58:16 +0000 (0:00:02.696) 0:01:30.382 ********* 2025-08-29 20:59:22.180903 | orchestrator | 2025-08-29 20:59:22.180914 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.180925 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:00.082) 0:01:30.465 ********* 2025-08-29 20:59:22.180936 | orchestrator | 2025-08-29 20:59:22.180946 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.180957 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:00.078) 0:01:30.543 ********* 2025-08-29 20:59:22.180968 | orchestrator | 2025-08-29 20:59:22.180979 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 20:59:22.180990 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:00.067) 0:01:30.611 ********* 2025-08-29 20:59:22.181001 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.181011 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.181022 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.181033 | orchestrator | 2025-08-29 20:59:22.181044 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 20:59:22.181055 | orchestrator | Friday 29 August 2025 20:58:25 +0000 (0:00:07.900) 0:01:38.511 ********* 2025-08-29 20:59:22.181066 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.181076 
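After the ovn-nb-db and ovn-sb-db containers come up, the following tasks wait for RAFT leader election and query the cluster leader. The same information can be read directly from the database servers; the control-socket paths below are the stock OVN defaults and are an assumption for the kolla images:

  # Show RAFT membership, term and current leader for both databases (socket paths assumed).
  docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound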
| orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.181087 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.181098 | orchestrator | 2025-08-29 20:59:22.181109 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 20:59:22.181126 | orchestrator | Friday 29 August 2025 20:58:33 +0000 (0:00:08.182) 0:01:46.694 ********* 2025-08-29 20:59:22.181136 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.181147 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.181158 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.181169 | orchestrator | 2025-08-29 20:59:22.181180 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 20:59:22.181191 | orchestrator | Friday 29 August 2025 20:58:41 +0000 (0:00:07.773) 0:01:54.467 ********* 2025-08-29 20:59:22.181201 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.181212 | orchestrator | 2025-08-29 20:59:22.181223 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 20:59:22.181234 | orchestrator | Friday 29 August 2025 20:58:41 +0000 (0:00:00.113) 0:01:54.581 ********* 2025-08-29 20:59:22.181244 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.181255 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.181346 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.181361 | orchestrator | 2025-08-29 20:59:22.181378 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 20:59:22.181390 | orchestrator | Friday 29 August 2025 20:58:42 +0000 (0:00:00.950) 0:01:55.532 ********* 2025-08-29 20:59:22.181401 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.181412 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.181422 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.181433 | orchestrator | 2025-08-29 20:59:22.181444 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 20:59:22.181455 | orchestrator | Friday 29 August 2025 20:58:42 +0000 (0:00:00.605) 0:01:56.137 ********* 2025-08-29 20:59:22.181465 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.181476 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.181487 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.181520 | orchestrator | 2025-08-29 20:59:22.181531 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 20:59:22.181542 | orchestrator | Friday 29 August 2025 20:58:43 +0000 (0:00:00.910) 0:01:57.048 ********* 2025-08-29 20:59:22.181553 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.181564 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.181575 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.181585 | orchestrator | 2025-08-29 20:59:22.181596 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 20:59:22.181607 | orchestrator | Friday 29 August 2025 20:58:44 +0000 (0:00:00.582) 0:01:57.630 ********* 2025-08-29 20:59:22.181618 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.181629 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.181640 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.181651 | orchestrator | 2025-08-29 20:59:22.181662 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] 
********************************************* 2025-08-29 20:59:22.181672 | orchestrator | Friday 29 August 2025 20:58:45 +0000 (0:00:00.824) 0:01:58.454 ********* 2025-08-29 20:59:22.181683 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.181694 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.181705 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.181716 | orchestrator | 2025-08-29 20:59:22.181727 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-08-29 20:59:22.181738 | orchestrator | Friday 29 August 2025 20:58:45 +0000 (0:00:00.751) 0:01:59.206 ********* 2025-08-29 20:59:22.181749 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.181759 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.181770 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.181781 | orchestrator | 2025-08-29 20:59:22.181792 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 20:59:22.181803 | orchestrator | Friday 29 August 2025 20:58:46 +0000 (0:00:00.630) 0:01:59.836 ********* 2025-08-29 20:59:22.181814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181841 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181865 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181876 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181887 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181927 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181939 | orchestrator | 2025-08-29 20:59:22.181950 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 20:59:22.181961 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:01.408) 0:02:01.245 ********* 2025-08-29 20:59:22.181972 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.181990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182006 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182074 | orchestrator | 
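(Aside on the config.json files being copied here: each service directory under /etc/kolla/<service>/ on the host is bind-mounted read-only into the container as /var/lib/kolla/config_files/, as the volume lists above show, and the config.json inside it tells kolla's container entrypoint which command to run and which files to install. The sketch below is illustrative only — the command and paths are hypothetical stand-ins, not values taken from this deployment.)

import json

# Hypothetical example of the shape of a kolla config.json; the real files are
# rendered from kolla-ansible templates and will differ for ovn-nb-db.
example_config = {
    "command": "/usr/local/bin/start-ovn-nb-db",          # placeholder command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/ovn-nb-db.conf",  # placeholder
            "dest": "/etc/ovn/ovn-nb-db.conf",                        # placeholder
            "owner": "root",
            "perm": "0600",
        }
    ],
}

with open("config.json", "w") as handle:
    json.dump(example_config, handle, indent=4)
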
changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182104 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182139 | orchestrator | 2025-08-29 20:59:22.182150 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 20:59:22.182219 | orchestrator | Friday 29 August 2025 20:58:51 +0000 (0:00:03.902) 0:02:05.147 ********* 2025-08-29 20:59:22.182247 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182259 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182276 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182299 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 20:59:22.182363 | orchestrator | 2025-08-29 20:59:22.182374 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.182385 | orchestrator | Friday 29 August 2025 20:58:54 +0000 (0:00:02.579) 0:02:07.727 ********* 2025-08-29 20:59:22.182403 | orchestrator | 2025-08-29 20:59:22.182414 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.182425 | orchestrator | Friday 29 August 2025 20:58:54 +0000 (0:00:00.064) 0:02:07.792 ********* 2025-08-29 20:59:22.182435 | 
orchestrator | 2025-08-29 20:59:22.182446 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 20:59:22.182457 | orchestrator | Friday 29 August 2025 20:58:55 +0000 (0:00:00.672) 0:02:08.465 ********* 2025-08-29 20:59:22.182468 | orchestrator | 2025-08-29 20:59:22.182479 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 20:59:22.182490 | orchestrator | Friday 29 August 2025 20:58:55 +0000 (0:00:00.282) 0:02:08.747 ********* 2025-08-29 20:59:22.182571 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.182582 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.182593 | orchestrator | 2025-08-29 20:59:22.182614 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 20:59:22.182626 | orchestrator | Friday 29 August 2025 20:59:01 +0000 (0:00:06.486) 0:02:15.234 ********* 2025-08-29 20:59:22.182637 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.182648 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.182669 | orchestrator | 2025-08-29 20:59:22.182681 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 20:59:22.182692 | orchestrator | Friday 29 August 2025 20:59:07 +0000 (0:00:06.182) 0:02:21.417 ********* 2025-08-29 20:59:22.182703 | orchestrator | changed: [testbed-node-1] 2025-08-29 20:59:22.182713 | orchestrator | changed: [testbed-node-2] 2025-08-29 20:59:22.182724 | orchestrator | 2025-08-29 20:59:22.182735 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 20:59:22.182746 | orchestrator | Friday 29 August 2025 20:59:14 +0000 (0:00:06.357) 0:02:27.774 ********* 2025-08-29 20:59:22.182756 | orchestrator | skipping: [testbed-node-0] 2025-08-29 20:59:22.182767 | orchestrator | 2025-08-29 20:59:22.182778 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 20:59:22.182795 | orchestrator | Friday 29 August 2025 20:59:14 +0000 (0:00:00.141) 0:02:27.915 ********* 2025-08-29 20:59:22.182806 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.182817 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.182828 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.182838 | orchestrator | 2025-08-29 20:59:22.182849 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 20:59:22.182860 | orchestrator | Friday 29 August 2025 20:59:15 +0000 (0:00:00.723) 0:02:28.638 ********* 2025-08-29 20:59:22.182871 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.182882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.182892 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.182903 | orchestrator | 2025-08-29 20:59:22.182914 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 20:59:22.182925 | orchestrator | Friday 29 August 2025 20:59:15 +0000 (0:00:00.605) 0:02:29.244 ********* 2025-08-29 20:59:22.182935 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.182946 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.182957 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.182967 | orchestrator | 2025-08-29 20:59:22.182978 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 20:59:22.182989 | 
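(Aside on the "Get OVN_Northbound cluster leader" / "Configure OVN NB connection settings" tasks above: only the node that currently leads the Raft cluster applies the connection change, which is why testbed-node-1 and testbed-node-2 are skipped. A minimal sketch of that leader check follows, assuming the ovn_nb_db container ships ovs-appctl, that the control socket lives at the default /var/run/ovn/ovnnb_db.ctl path, and that cluster/status prints a "Role:" line — all assumptions, not facts taken from this job.)

import subprocess

def nb_db_is_leader(container: str = "ovn_nb_db") -> bool:
    """Return True if the local ovn-nb-db reports itself as the Raft leader."""
    output = subprocess.run(
        [
            "docker", "exec", container,
            "ovs-appctl", "-t", "/var/run/ovn/ovnnb_db.ctl",  # assumed socket path
            "cluster/status", "OVN_Northbound",
        ],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    # cluster/status is expected to print one "Role: leader|follower|candidate" line.
    return any(line.strip() == "Role: leader" for line in output.splitlines())

if __name__ == "__main__":
    print("leader" if nb_db_is_leader() else "not leader")
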
orchestrator | Friday 29 August 2025 20:59:16 +0000 (0:00:00.797) 0:02:30.042 ********* 2025-08-29 20:59:22.183000 | orchestrator | skipping: [testbed-node-1] 2025-08-29 20:59:22.183011 | orchestrator | skipping: [testbed-node-2] 2025-08-29 20:59:22.183022 | orchestrator | changed: [testbed-node-0] 2025-08-29 20:59:22.183032 | orchestrator | 2025-08-29 20:59:22.183043 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 20:59:22.183054 | orchestrator | Friday 29 August 2025 20:59:17 +0000 (0:00:00.573) 0:02:30.615 ********* 2025-08-29 20:59:22.183064 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.183081 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.183090 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.183100 | orchestrator | 2025-08-29 20:59:22.183109 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 20:59:22.183119 | orchestrator | Friday 29 August 2025 20:59:17 +0000 (0:00:00.688) 0:02:31.304 ********* 2025-08-29 20:59:22.183129 | orchestrator | ok: [testbed-node-0] 2025-08-29 20:59:22.183138 | orchestrator | ok: [testbed-node-2] 2025-08-29 20:59:22.183148 | orchestrator | ok: [testbed-node-1] 2025-08-29 20:59:22.183157 | orchestrator | 2025-08-29 20:59:22.183166 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 20:59:22.183176 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 20:59:22.183187 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 20:59:22.183203 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 20:59:22.183213 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:59:22.183223 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:59:22.183233 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 20:59:22.183242 | orchestrator | 2025-08-29 20:59:22.183252 | orchestrator | 2025-08-29 20:59:22.183261 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 20:59:22.183271 | orchestrator | Friday 29 August 2025 20:59:19 +0000 (0:00:01.169) 0:02:32.474 ********* 2025-08-29 20:59:22.183280 | orchestrator | =============================================================================== 2025-08-29 20:59:22.183290 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.35s 2025-08-29 20:59:22.183299 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.11s 2025-08-29 20:59:22.183309 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.39s 2025-08-29 20:59:22.183319 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.37s 2025-08-29 20:59:22.183328 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.13s 2025-08-29 20:59:22.183337 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.06s 2025-08-29 20:59:22.183347 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s 
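(Aside on the "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks above: they block until the database listeners accept connections. A minimal equivalent in Python, assuming the conventional OVN NB/SB ports 6641 and 6642 and the node-0 address seen elsewhere in this log; the actual ports and bind addresses used by this deployment are not shown here.)

import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> None:
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # listener is up
        except OSError:
            time.sleep(1)  # retry about once per second, like the tasks above
    raise TimeoutError(f"{host}:{port} did not become reachable within {timeout}s")

# Assumed defaults: OVN NB on 6641, SB on 6642.
for port in (6641, 6642):
    wait_for_port("192.168.16.10", port)
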
2025-08-29 20:59:22.183357 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.81s 2025-08-29 20:59:22.183366 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.70s 2025-08-29 20:59:22.183376 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.58s 2025-08-29 20:59:22.183385 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.15s 2025-08-29 20:59:22.183394 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.62s 2025-08-29 20:59:22.183404 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.54s 2025-08-29 20:59:22.183413 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.50s 2025-08-29 20:59:22.183423 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.48s 2025-08-29 20:59:22.183432 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-08-29 20:59:22.183446 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.39s 2025-08-29 20:59:22.183462 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.17s 2025-08-29 20:59:22.183472 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.06s 2025-08-29 20:59:22.183481 | orchestrator | ovn-db : Flush handlers ------------------------------------------------- 1.02s 2025-08-29 20:59:25.206318 | orchestrator | 2025-08-29 20:59:25 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:25.207436 | orchestrator | 2025-08-29 20:59:25 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:25.207461 | orchestrator | 2025-08-29 20:59:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:28.254976 | orchestrator | 2025-08-29 20:59:28 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:28.255638 | orchestrator | 2025-08-29 20:59:28 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:28.256390 | orchestrator | 2025-08-29 20:59:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:31.307708 | orchestrator | 2025-08-29 20:59:31 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:31.309077 | orchestrator | 2025-08-29 20:59:31 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:31.309115 | orchestrator | 2025-08-29 20:59:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:34.352251 | orchestrator | 2025-08-29 20:59:34 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:34.354830 | orchestrator | 2025-08-29 20:59:34 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:34.354910 | orchestrator | 2025-08-29 20:59:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:37.389795 | orchestrator | 2025-08-29 20:59:37 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:37.390414 | orchestrator | 2025-08-29 20:59:37 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:37.390444 | orchestrator | 2025-08-29 20:59:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 
20:59:40.422541 | orchestrator | 2025-08-29 20:59:40 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:40.422930 | orchestrator | 2025-08-29 20:59:40 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:40.422963 | orchestrator | 2025-08-29 20:59:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:43.475593 | orchestrator | 2025-08-29 20:59:43 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:43.478084 | orchestrator | 2025-08-29 20:59:43 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:43.478129 | orchestrator | 2025-08-29 20:59:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:46.506903 | orchestrator | 2025-08-29 20:59:46 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:46.508236 | orchestrator | 2025-08-29 20:59:46 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:46.508750 | orchestrator | 2025-08-29 20:59:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:49.540396 | orchestrator | 2025-08-29 20:59:49 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:49.540989 | orchestrator | 2025-08-29 20:59:49 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:49.541014 | orchestrator | 2025-08-29 20:59:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:52.573775 | orchestrator | 2025-08-29 20:59:52 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:52.574344 | orchestrator | 2025-08-29 20:59:52 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:52.574377 | orchestrator | 2025-08-29 20:59:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:55.621383 | orchestrator | 2025-08-29 20:59:55 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:55.621521 | orchestrator | 2025-08-29 20:59:55 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:55.621539 | orchestrator | 2025-08-29 20:59:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 20:59:58.655655 | orchestrator | 2025-08-29 20:59:58 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 20:59:58.657246 | orchestrator | 2025-08-29 20:59:58 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 20:59:58.657622 | orchestrator | 2025-08-29 20:59:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:01.700002 | orchestrator | 2025-08-29 21:00:01 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:01.703318 | orchestrator | 2025-08-29 21:00:01 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:01.703362 | orchestrator | 2025-08-29 21:00:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:04.743931 | orchestrator | 2025-08-29 21:00:04 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:04.745272 | orchestrator | 2025-08-29 21:00:04 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:04.745319 | orchestrator | 2025-08-29 21:00:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:07.793131 | orchestrator | 2025-08-29 21:00:07 | INFO  | Task 
d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:07.796156 | orchestrator | 2025-08-29 21:00:07 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:07.796196 | orchestrator | 2025-08-29 21:00:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:10.840414 | orchestrator | 2025-08-29 21:00:10 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:10.841224 | orchestrator | 2025-08-29 21:00:10 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:10.841255 | orchestrator | 2025-08-29 21:00:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:13.870420 | orchestrator | 2025-08-29 21:00:13 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:13.870767 | orchestrator | 2025-08-29 21:00:13 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:13.870790 | orchestrator | 2025-08-29 21:00:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:16.924826 | orchestrator | 2025-08-29 21:00:16 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:16.927564 | orchestrator | 2025-08-29 21:00:16 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:16.928035 | orchestrator | 2025-08-29 21:00:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:19.975759 | orchestrator | 2025-08-29 21:00:19 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:19.976884 | orchestrator | 2025-08-29 21:00:19 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:19.977431 | orchestrator | 2025-08-29 21:00:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:23.024048 | orchestrator | 2025-08-29 21:00:23 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:23.025393 | orchestrator | 2025-08-29 21:00:23 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:23.025517 | orchestrator | 2025-08-29 21:00:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:26.067020 | orchestrator | 2025-08-29 21:00:26 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:26.070121 | orchestrator | 2025-08-29 21:00:26 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:26.071521 | orchestrator | 2025-08-29 21:00:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:29.116176 | orchestrator | 2025-08-29 21:00:29 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:29.118227 | orchestrator | 2025-08-29 21:00:29 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:29.118310 | orchestrator | 2025-08-29 21:00:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:32.158199 | orchestrator | 2025-08-29 21:00:32 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:32.158880 | orchestrator | 2025-08-29 21:00:32 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:32.159094 | orchestrator | 2025-08-29 21:00:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:35.217696 | orchestrator | 2025-08-29 21:00:35 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:35.219535 | orchestrator 
| 2025-08-29 21:00:35 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:35.220120 | orchestrator | 2025-08-29 21:00:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:38.267925 | orchestrator | 2025-08-29 21:00:38 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:38.269881 | orchestrator | 2025-08-29 21:00:38 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:38.269913 | orchestrator | 2025-08-29 21:00:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:41.307527 | orchestrator | 2025-08-29 21:00:41 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:41.308460 | orchestrator | 2025-08-29 21:00:41 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:41.308568 | orchestrator | 2025-08-29 21:00:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:44.364527 | orchestrator | 2025-08-29 21:00:44 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:44.366732 | orchestrator | 2025-08-29 21:00:44 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:44.366758 | orchestrator | 2025-08-29 21:00:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:47.414763 | orchestrator | 2025-08-29 21:00:47 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:47.416536 | orchestrator | 2025-08-29 21:00:47 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:47.416571 | orchestrator | 2025-08-29 21:00:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:50.457946 | orchestrator | 2025-08-29 21:00:50 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:50.458894 | orchestrator | 2025-08-29 21:00:50 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:50.458918 | orchestrator | 2025-08-29 21:00:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:53.491908 | orchestrator | 2025-08-29 21:00:53 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:53.492885 | orchestrator | 2025-08-29 21:00:53 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:53.493248 | orchestrator | 2025-08-29 21:00:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:56.538820 | orchestrator | 2025-08-29 21:00:56 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:56.541060 | orchestrator | 2025-08-29 21:00:56 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:56.541181 | orchestrator | 2025-08-29 21:00:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:00:59.591382 | orchestrator | 2025-08-29 21:00:59 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:00:59.592332 | orchestrator | 2025-08-29 21:00:59 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:00:59.592370 | orchestrator | 2025-08-29 21:00:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:02.627502 | orchestrator | 2025-08-29 21:01:02 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:02.627578 | orchestrator | 2025-08-29 21:01:02 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 
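(Aside on the repeating INFO lines in this stretch: the OSISM client is polling two task IDs once per second until they leave the STARTED state. The sketch below reproduces that pattern; get_task_state is a hypothetical placeholder for whatever client call returns the state string, since the real call is not visible in this log.)

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> dict:
    """Poll task states until every task reaches a terminal state."""
    states = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_task_state(task_id)  # placeholder for the real API call
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
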
2025-08-29 21:01:02.627595 | orchestrator | 2025-08-29 21:01:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:05.669790 | orchestrator | 2025-08-29 21:01:05 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:05.669884 | orchestrator | 2025-08-29 21:01:05 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:05.669901 | orchestrator | 2025-08-29 21:01:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:08.710447 | orchestrator | 2025-08-29 21:01:08 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:08.712112 | orchestrator | 2025-08-29 21:01:08 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:08.712386 | orchestrator | 2025-08-29 21:01:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:11.745876 | orchestrator | 2025-08-29 21:01:11 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:11.747896 | orchestrator | 2025-08-29 21:01:11 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:11.747944 | orchestrator | 2025-08-29 21:01:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:14.793667 | orchestrator | 2025-08-29 21:01:14 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:14.794833 | orchestrator | 2025-08-29 21:01:14 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:14.794860 | orchestrator | 2025-08-29 21:01:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:17.833073 | orchestrator | 2025-08-29 21:01:17 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:17.836803 | orchestrator | 2025-08-29 21:01:17 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:17.836842 | orchestrator | 2025-08-29 21:01:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:20.882502 | orchestrator | 2025-08-29 21:01:20 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:20.885429 | orchestrator | 2025-08-29 21:01:20 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:20.885455 | orchestrator | 2025-08-29 21:01:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:23.929372 | orchestrator | 2025-08-29 21:01:23 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:23.931352 | orchestrator | 2025-08-29 21:01:23 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:23.931387 | orchestrator | 2025-08-29 21:01:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:26.969167 | orchestrator | 2025-08-29 21:01:26 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:26.970908 | orchestrator | 2025-08-29 21:01:26 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:26.971571 | orchestrator | 2025-08-29 21:01:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:30.021469 | orchestrator | 2025-08-29 21:01:30 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:30.023386 | orchestrator | 2025-08-29 21:01:30 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:30.024135 | orchestrator | 2025-08-29 21:01:30 | INFO  | Wait 1 second(s) until 
the next check 2025-08-29 21:01:33.067472 | orchestrator | 2025-08-29 21:01:33 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:33.069041 | orchestrator | 2025-08-29 21:01:33 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:33.069058 | orchestrator | 2025-08-29 21:01:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:36.113769 | orchestrator | 2025-08-29 21:01:36 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:36.114177 | orchestrator | 2025-08-29 21:01:36 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state STARTED 2025-08-29 21:01:36.114245 | orchestrator | 2025-08-29 21:01:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:39.158617 | orchestrator | 2025-08-29 21:01:39 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:39.163445 | orchestrator | 2025-08-29 21:01:39 | INFO  | Task 925653dd-20f5-4967-9def-7e7e85f8cb5b is in state SUCCESS 2025-08-29 21:01:39.165973 | orchestrator | 2025-08-29 21:01:39.166007 | orchestrator | 2025-08-29 21:01:39.166078 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:01:39.166499 | orchestrator | 2025-08-29 21:01:39.166519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:01:39.166530 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.499) 0:00:00.499 ********* 2025-08-29 21:01:39.166542 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.166554 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.166565 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.166576 | orchestrator | 2025-08-29 21:01:39.166588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:01:39.166599 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.370) 0:00:00.870 ********* 2025-08-29 21:01:39.166610 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-08-29 21:01:39.166622 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-08-29 21:01:39.166633 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-08-29 21:01:39.166644 | orchestrator | 2025-08-29 21:01:39.166655 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-08-29 21:01:39.166690 | orchestrator | 2025-08-29 21:01:39.166702 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 21:01:39.166713 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.521) 0:00:01.392 ********* 2025-08-29 21:01:39.166724 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.166735 | orchestrator | 2025-08-29 21:01:39.166759 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-08-29 21:01:39.166771 | orchestrator | Friday 29 August 2025 20:55:39 +0000 (0:00:00.974) 0:00:02.367 ********* 2025-08-29 21:01:39.166782 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.166793 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.166804 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.166892 | orchestrator | 2025-08-29 21:01:39.166906 | orchestrator | TASK [Setting sysctl values] 
*************************************************** 2025-08-29 21:01:39.166918 | orchestrator | Friday 29 August 2025 20:55:40 +0000 (0:00:01.132) 0:00:03.499 ********* 2025-08-29 21:01:39.166929 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.168247 | orchestrator | 2025-08-29 21:01:39.168281 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-08-29 21:01:39.168935 | orchestrator | Friday 29 August 2025 20:55:41 +0000 (0:00:00.705) 0:00:04.204 ********* 2025-08-29 21:01:39.168951 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.168962 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.168973 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.168984 | orchestrator | 2025-08-29 21:01:39.168995 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-08-29 21:01:39.169007 | orchestrator | Friday 29 August 2025 20:55:42 +0000 (0:00:00.735) 0:00:04.940 ********* 2025-08-29 21:01:39.169018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169029 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169040 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169051 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169062 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-08-29 21:01:39.169084 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 21:01:39.169096 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 21:01:39.169107 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-08-29 21:01:39.169118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 21:01:39.169129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 21:01:39.169140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-08-29 21:01:39.169151 | orchestrator | 2025-08-29 21:01:39.169161 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 21:01:39.169172 | orchestrator | Friday 29 August 2025 20:55:44 +0000 (0:00:02.717) 0:00:07.657 ********* 2025-08-29 21:01:39.169183 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-08-29 21:01:39.169195 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-08-29 21:01:39.169228 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-08-29 21:01:39.169239 | orchestrator | 2025-08-29 21:01:39.169250 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 21:01:39.169261 | orchestrator | Friday 29 August 2025 20:55:45 +0000 (0:00:00.871) 0:00:08.529 ********* 2025-08-29 21:01:39.169287 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-08-29 
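(Aside on the "Setting sysctl values" results above: net.ipv4.ip_nonlocal_bind and net.ipv6.ip_nonlocal_bind are enabled so that keepalived/haproxy on each controller can bind the virtual IP even while another node currently holds it, and net.unix.max_dgram_qlen is raised for syslog-style datagram sockets. A minimal sketch of applying and verifying these keys at runtime via /proc/sys follows; it assumes root privileges and only covers the runtime write, whereas the Ansible sysctl module used here also persists the values.)

from pathlib import Path

def set_sysctl(name: str, value: str) -> None:
    """Write a sysctl value via /proc/sys and read it back (requires root)."""
    path = Path("/proc/sys") / name.replace(".", "/")
    path.write_text(f"{value}\n")
    applied = path.read_text().strip()
    if applied != value:
        raise RuntimeError(f"{name} is {applied}, expected {value}")

# Values taken from the task output above; the KOLLA_UNSET entry is left untouched.
for key, value in {
    "net.ipv4.ip_nonlocal_bind": "1",
    "net.ipv6.ip_nonlocal_bind": "1",
    "net.unix.max_dgram_qlen": "128",
}.items():
    set_sysctl(key, value)
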
21:01:39.169298 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-08-29 21:01:39.169309 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-08-29 21:01:39.169320 | orchestrator | 2025-08-29 21:01:39.169332 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 21:01:39.169343 | orchestrator | Friday 29 August 2025 20:55:47 +0000 (0:00:01.561) 0:00:10.091 ********* 2025-08-29 21:01:39.169355 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-08-29 21:01:39.170081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.170117 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-08-29 21:01:39.170129 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.170140 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-08-29 21:01:39.170151 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.170162 | orchestrator | 2025-08-29 21:01:39.170173 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-08-29 21:01:39.170185 | orchestrator | Friday 29 August 2025 20:55:48 +0000 (0:00:01.308) 0:00:11.399 ********* 2025-08-29 21:01:39.170221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.170440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.170472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.170484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.170496 | orchestrator | 2025-08-29 21:01:39.170507 | orchestrator | TASK [loadbalancer : Ensuring 
haproxy service config subdir exists] ************ 2025-08-29 21:01:39.170519 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:02.665) 0:00:14.065 ********* 2025-08-29 21:01:39.170530 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.170541 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.170552 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.170564 | orchestrator | 2025-08-29 21:01:39.170575 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-08-29 21:01:39.170585 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:01.587) 0:00:15.652 ********* 2025-08-29 21:01:39.170596 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-08-29 21:01:39.170608 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-08-29 21:01:39.170619 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-08-29 21:01:39.170630 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-08-29 21:01:39.170641 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-08-29 21:01:39.170654 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-08-29 21:01:39.170674 | orchestrator | 2025-08-29 21:01:39.170687 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-08-29 21:01:39.170700 | orchestrator | Friday 29 August 2025 20:55:55 +0000 (0:00:02.532) 0:00:18.185 ********* 2025-08-29 21:01:39.170712 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.170724 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.170737 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.170750 | orchestrator | 2025-08-29 21:01:39.170789 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-08-29 21:01:39.170803 | orchestrator | Friday 29 August 2025 20:55:56 +0000 (0:00:01.610) 0:00:19.796 ********* 2025-08-29 21:01:39.170815 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.170827 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.170840 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.170852 | orchestrator | 2025-08-29 21:01:39.170864 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-08-29 21:01:39.170877 | orchestrator | Friday 29 August 2025 20:55:57 +0000 (0:00:01.080) 0:00:20.877 ********* 2025-08-29 21:01:39.170890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.170912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.171001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.171039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.171064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.171084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.171096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-08-29 21:01:39.171115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171193 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.171237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.171255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.171267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.171290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171302 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 21:01:39.171313 | orchestrator | 2025-08-29 21:01:39.171324 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-08-29 21:01:39.171335 | orchestrator | Friday 29 August 2025 20:55:58 +0000 (0:00:00.748) 0:00:21.625 ********* 2025-08-29 21:01:39.171347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.171424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.171466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171494 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.171573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345', '__omit_place_holder__eb640969a2068577c47a9451fe89dce69a04a345'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 21:01:39.171585 | orchestrator | 2025-08-29 21:01:39.171596 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 21:01:39.171607 | orchestrator | Friday 29 August 2025 20:56:02 +0000 (0:00:03.320) 0:00:24.946 ********* 2025-08-29 21:01:39.171619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
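The skip/changed pattern in the "Copying checks for services which are enabled" task above follows directly from the service definitions echoed in each item: a check is only copied when the service is enabled and defines a healthcheck, which is why haproxy and proxysql change while keepalived (no healthcheck block) and haproxy-ssh (enabled: False) are skipped. A minimal Python sketch of that filter, built from values taken from the log items; the exact kolla-ansible conditions are an assumption, not quoted from the role:

# Illustrative only -- reproduces the skip/changed pattern seen above,
# assuming the loop condition is "enabled and has a healthcheck".
services = {
    "haproxy":     {"enabled": True,  "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]}},
    "proxysql":    {"enabled": True,  "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"]}},
    "keepalived":  {"enabled": True,  "healthcheck": None},
    "haproxy-ssh": {"enabled": False, "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 2985"]}},
}

for name, svc in services.items():
    if svc["enabled"] and svc.get("healthcheck"):
        print(f"changed: copy check for {name}")
    else:
        print(f"skipping: {name}")

The "Copying over config.json files for services" loop that continues below iterates over the same dict without the healthcheck condition, which is why keepalived is rendered there as well.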
2025-08-29 21:01:39.171665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.171715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.171726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.171737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.171813 | orchestrator | 2025-08-29 21:01:39.171827 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 21:01:39.171838 | orchestrator | Friday 29 August 2025 20:56:05 +0000 (0:00:03.642) 0:00:28.588 ********* 2025-08-29 21:01:39.171849 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 21:01:39.171867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 21:01:39.171879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 21:01:39.171890 | orchestrator | 2025-08-29 21:01:39.171901 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 21:01:39.171949 | orchestrator | Friday 29 August 2025 20:56:08 +0000 (0:00:02.926) 0:00:31.515 ********* 2025-08-29 21:01:39.171960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 21:01:39.171972 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 21:01:39.171992 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 21:01:39.172003 | orchestrator | 2025-08-29 21:01:39.172014 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 21:01:39.172025 | orchestrator | Friday 29 August 2025 20:56:14 +0000 (0:00:05.703) 0:00:37.218 ********* 2025-08-29 21:01:39.172036 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.172046 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.172057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.172068 | orchestrator | 2025-08-29 21:01:39.172079 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 21:01:39.172095 | orchestrator | Friday 29 August 2025 20:56:14 +0000 (0:00:00.568) 0:00:37.787 ********* 2025-08-29 21:01:39.172106 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 21:01:39.172119 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 21:01:39.172129 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 21:01:39.172140 | orchestrator | 2025-08-29 21:01:39.172151 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 21:01:39.172163 | orchestrator | Friday 29 August 2025 20:56:17 +0000 (0:00:02.797) 0:00:40.584 ********* 2025-08-29 21:01:39.172174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 21:01:39.172185 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 21:01:39.172195 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 21:01:39.172226 | orchestrator | 2025-08-29 21:01:39.172237 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-08-29 21:01:39.172248 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:03.125) 0:00:43.710 ********* 2025-08-29 21:01:39.172259 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 21:01:39.172270 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 21:01:39.172340 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-08-29 21:01:39.172353 | orchestrator | 2025-08-29 21:01:39.172392 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 21:01:39.172403 | orchestrator | Friday 29 August 2025 20:56:23 +0000 (0:00:02.469) 0:00:46.180 ********* 2025-08-29 21:01:39.172414 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 21:01:39.172435 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 21:01:39.172446 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 21:01:39.172457 | orchestrator | 2025-08-29 21:01:39.172468 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 21:01:39.172479 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:01.771) 0:00:47.951 ********* 2025-08-29 21:01:39.172490 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.172501 | orchestrator | 2025-08-29 21:01:39.172511 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 21:01:39.172523 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:00.662) 0:00:48.614 ********* 2025-08-29 21:01:39.172534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.172709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.172727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.172746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.172758 | orchestrator | 2025-08-29 21:01:39.172769 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 21:01:39.172780 | orchestrator | Friday 29 August 2025 20:56:29 +0000 (0:00:03.512) 0:00:52.126 ********* 2025-08-29 21:01:39.172797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.172809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.172821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.172832 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.172854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.172873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.172891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.172903 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.172914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.172930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.172942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.172953 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.172964 | orchestrator | 2025-08-29 21:01:39.172975 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend 
internal TLS key] *** 2025-08-29 21:01:39.172986 | orchestrator | Friday 29 August 2025 20:56:29 +0000 (0:00:00.513) 0:00:52.640 ********* 2025-08-29 21:01:39.173046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173095 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.173106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173134 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.173157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173197 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.173312 | orchestrator | 2025-08-29 21:01:39.173325 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 21:01:39.173337 | orchestrator | Friday 29 August 2025 20:56:30 +0000 (0:00:00.928) 0:00:53.568 ********* 2025-08-29 21:01:39.173357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
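Each healthcheck block repeated in these items corresponds to Docker's standard health-check options. A short Python sketch that builds equivalent `docker run` flags from one of the haproxy entries above; the flag names are plain Docker CLI, and whether kolla_docker applies them in exactly this form (including the seconds suffix) is an assumption:

# Illustrative only -- map one healthcheck block from the log to docker run flags.
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "timeout": "30",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
}

flags = [
    "--health-cmd", healthcheck["test"][1],               # CMD-SHELL -> shell-form command
    "--health-interval", f"{healthcheck['interval']}s",
    "--health-retries", healthcheck["retries"],
    "--health-start-period", f"{healthcheck['start_period']}s",
    "--health-timeout", f"{healthcheck['timeout']}s",
]
print(" ".join(flags))

The printed flags exercise the same healthcheck_curl / healthcheck_listen probes the log shows as the containers' configured test commands.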
2025-08-29 21:01:39.173376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173401 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.173413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173459 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.173477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173522 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.173534 | orchestrator | 2025-08-29 21:01:39.173546 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 21:01:39.173557 | orchestrator | Friday 29 August 2025 20:56:31 +0000 (0:00:00.539) 0:00:54.108 ********* 2025-08-29 21:01:39.173570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173600 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.173624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.173686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173806 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.173816 | orchestrator | 2025-08-29 21:01:39.173826 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 21:01:39.173837 | orchestrator | Friday 29 August 2025 20:56:31 +0000 (0:00:00.611) 0:00:54.720 ********* 2025-08-29 21:01:39.173847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173887 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.173903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.173921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.173932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.173942 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.173953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178410 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.178425 | orchestrator | 2025-08-29 21:01:39.178437 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 21:01:39.178449 | orchestrator | Friday 29 August 2025 20:56:32 +0000 (0:00:00.763) 0:00:55.483 ********* 2025-08-29 21:01:39.178481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.178550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178618 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.178629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178678 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.178706 | orchestrator | 2025-08-29 21:01:39.178733 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-08-29 21:01:39.178745 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:00.522) 0:00:56.005 ********* 2025-08-29 21:01:39.178757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178804 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.178816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.178874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.178909 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.178920 | orchestrator | 2025-08-29 21:01:39.178931 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 21:01:39.178949 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:00.518) 0:00:56.523 ********* 2025-08-29 21:01:39.178961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.178983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.178995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.179007 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.179018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.179030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.179042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.179053 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.179070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 21:01:39.179088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 21:01:39.179100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 21:01:39.179111 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.179122 | orchestrator | 2025-08-29 21:01:39.179133 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 21:01:39.179171 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:01.013) 0:00:57.537 ********* 2025-08-29 21:01:39.179184 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 21:01:39.179195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 21:01:39.179228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 21:01:39.179239 | orchestrator | 2025-08-29 21:01:39.179251 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 21:01:39.179261 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:01.801) 0:00:59.338 ********* 2025-08-29 21:01:39.179272 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 21:01:39.179283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 21:01:39.179294 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 21:01:39.179305 | orchestrator | 2025-08-29 21:01:39.179316 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 21:01:39.179327 | orchestrator | Friday 29 August 2025 20:56:38 +0000 (0:00:01.695) 0:01:01.033 ********* 2025-08-29 21:01:39.179338 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:01:39.179349 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:01:39.179360 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:01:39.179371 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:01:39.179382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.179393 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:01:39.179410 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.179421 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:01:39.179432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.179443 | orchestrator | 2025-08-29 21:01:39.179454 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 21:01:39.179465 | orchestrator | Friday 29 August 2025 20:56:38 +0000 (0:00:00.820) 0:01:01.854 ********* 2025-08-29 21:01:39.179484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 21:01:39.179566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.179584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.179597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 21:01:39.179608 | orchestrator | 2025-08-29 21:01:39.179620 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 21:01:39.179631 | orchestrator | Friday 29 August 2025 20:56:41 +0000 (0:00:02.625) 0:01:04.479 ********* 2025-08-29 
21:01:39.179646 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.179658 | orchestrator | 2025-08-29 21:01:39.179669 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 21:01:39.179680 | orchestrator | Friday 29 August 2025 20:56:42 +0000 (0:00:00.485) 0:01:04.965 ********* 2025-08-29 21:01:39.179693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 21:01:39.179706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.179730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 21:01:39.179778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.179790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 21:01:39.179820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.179849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179872 | orchestrator | 2025-08-29 21:01:39.179888 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 21:01:39.179899 | orchestrator | Friday 29 August 2025 20:56:45 +0000 (0:00:03.300) 0:01:08.266 ********* 2025-08-29 21:01:39.179911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 21:01:39.179923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.179940 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.179964 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.179982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 21:01:39.179999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.180011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180042 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.180054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 21:01:39.180065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.180082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180106 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.180117 | orchestrator | 2025-08-29 21:01:39.180132 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] 
************************** 2025-08-29 21:01:39.180143 | orchestrator | Friday 29 August 2025 20:56:45 +0000 (0:00:00.655) 0:01:08.921 ********* 2025-08-29 21:01:39.180155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180198 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.180227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180250 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.180261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 21:01:39.180272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.180283 | orchestrator | 2025-08-29 21:01:39.180294 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 21:01:39.180305 | orchestrator | Friday 29 August 2025 20:56:46 +0000 (0:00:00.767) 0:01:09.689 ********* 2025-08-29 21:01:39.180316 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.180327 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.180337 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.180348 | orchestrator | 2025-08-29 21:01:39.180359 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 21:01:39.180370 | orchestrator | Friday 29 August 2025 20:56:48 +0000 (0:00:01.490) 0:01:11.179 ********* 2025-08-29 21:01:39.180381 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.180392 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.180403 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.180413 | orchestrator | 2025-08-29 21:01:39.180424 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 21:01:39.180435 | orchestrator | Friday 29 August 2025 20:56:50 +0000 (0:00:01.812) 0:01:12.992 ********* 2025-08-29 21:01:39.180446 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.180457 | orchestrator | 2025-08-29 21:01:39.180468 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 21:01:39.180478 | orchestrator | Friday 29 August 2025 20:56:50 +0000 (0:00:00.666) 0:01:13.658 ********* 2025-08-29 21:01:39.180498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.180512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.180559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.180614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180637 | orchestrator | 2025-08-29 21:01:39.180649 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 21:01:39.180660 | orchestrator | Friday 
29 August 2025 20:56:54 +0000 (0:00:03.802) 0:01:17.461 ********* 2025-08-29 21:01:39.180672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.180683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.180729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.180747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.180771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180783 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.180800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.180833 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.180845 | orchestrator | 2025-08-29 21:01:39.180856 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 21:01:39.180867 | orchestrator | Friday 29 August 2025 20:56:55 +0000 (0:00:01.020) 0:01:18.482 ********* 2025-08-29 21:01:39.180878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180906 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.180918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180940 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.180951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 21:01:39.180974 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.180985 | orchestrator | 2025-08-29 21:01:39.180996 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 21:01:39.181008 | orchestrator | Friday 29 August 2025 20:56:56 +0000 (0:00:00.788) 0:01:19.270 ********* 2025-08-29 21:01:39.181019 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.181030 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.181041 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.181052 | orchestrator | 2025-08-29 21:01:39.181063 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 21:01:39.181074 | orchestrator | Friday 29 August 2025 20:56:57 +0000 (0:00:01.493) 0:01:20.763 ********* 2025-08-29 21:01:39.181085 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.181096 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.181107 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.181118 | orchestrator | 2025-08-29 21:01:39.181129 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 21:01:39.181140 | orchestrator | Friday 29 August 2025 20:57:00 +0000 (0:00:02.437) 0:01:23.201 ********* 2025-08-29 21:01:39.181151 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.181162 | orchestrator 
| skipping: [testbed-node-1] 2025-08-29 21:01:39.181173 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.181184 | orchestrator | 2025-08-29 21:01:39.181195 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 21:01:39.181225 | orchestrator | Friday 29 August 2025 20:57:00 +0000 (0:00:00.500) 0:01:23.701 ********* 2025-08-29 21:01:39.181243 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.181254 | orchestrator | 2025-08-29 21:01:39.181265 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 21:01:39.181276 | orchestrator | Friday 29 August 2025 20:57:01 +0000 (0:00:00.597) 0:01:24.298 ********* 2025-08-29 21:01:39.181296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 21:01:39.181309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 21:01:39.181326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 21:01:39.181337 | orchestrator | 2025-08-29 21:01:39.181349 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single 
external frontend] *** 2025-08-29 21:01:39.181360 | orchestrator | Friday 29 August 2025 20:57:03 +0000 (0:00:02.371) 0:01:26.670 ********* 2025-08-29 21:01:39.181371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 21:01:39.181388 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.181400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 21:01:39.181411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.181429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 21:01:39.181441 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.181452 | orchestrator | 2025-08-29 21:01:39.181463 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 21:01:39.181474 | orchestrator | Friday 29 August 2025 20:57:05 +0000 (0:00:01.508) 0:01:28.178 ********* 2025-08-29 21:01:39.181491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181517 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.181529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181551 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.181568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 21:01:39.181591 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.181602 | orchestrator | 2025-08-29 21:01:39.181613 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 21:01:39.181624 | orchestrator | Friday 29 August 2025 20:57:06 +0000 (0:00:01.416) 0:01:29.594 ********* 2025-08-29 21:01:39.181635 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.181646 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.181657 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.181668 | orchestrator | 2025-08-29 21:01:39.181679 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 21:01:39.181690 | orchestrator | Friday 29 August 2025 20:57:07 +0000 (0:00:00.377) 0:01:29.972 ********* 2025-08-29 21:01:39.181701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.181711 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 21:01:39.181722 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.181733 | orchestrator | 2025-08-29 21:01:39.181744 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 21:01:39.181965 | orchestrator | Friday 29 August 2025 20:57:08 +0000 (0:00:01.194) 0:01:31.167 ********* 2025-08-29 21:01:39.181986 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.181997 | orchestrator | 2025-08-29 21:01:39.182008 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 21:01:39.182052 | orchestrator | Friday 29 August 2025 20:57:09 +0000 (0:00:00.764) 0:01:31.931 ********* 2025-08-29 21:01:39.182074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.182088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.182155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.182238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182285 | orchestrator | 2025-08-29 21:01:39.182296 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 
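The cinder items above follow the same shape as the other kolla-ansible service entries in this log: each service dict may carry a haproxy map whose sub-keys become one internal and one external frontend. As a minimal sketch reconstructed from the logged item data (the YAML layout is an assumption; only the keys and values visible in the log are used), the cinder-api definition that these haproxy-config tasks consume looks roughly like:

    cinder-api:
      container_name: cinder_api
      group: cinder-api
      enabled: true
      image: registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711
      haproxy:
        cinder_api:                      # internal frontend (external: false)
          enabled: "yes"
          mode: http
          external: false
          port: "8776"
          listen_port: "8776"
          tls_backend: "no"              # backends are reached over plain HTTP
        cinder_api_external:             # public frontend for api.testbed.osism.xyz
          enabled: "yes"
          mode: http
          external: true
          external_fqdn: api.testbed.osism.xyz
          port: "8776"
          listen_port: "8776"
          tls_backend: "no"

With tls_backend set to "no", HAProxy forwards unencrypted HTTP to the cinder-api processes on 192.168.16.10-12 port 8776, which is the same address each node's logged healthcheck_curl test probes.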
2025-08-29 21:01:39.182314 | orchestrator | Friday 29 August 2025 20:57:12 +0000 (0:00:03.094) 0:01:35.026 ********* 2025-08-29 21:01:39.182326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.182338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182378 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.182395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.182413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.182465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.182486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.182528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.182541 | orchestrator | 2025-08-29 21:01:39.182553 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-08-29 21:01:39.182566 | orchestrator | Friday 29 August 2025 20:57:12 +0000 (0:00:00.593) 0:01:35.619 ********* 2025-08-29 21:01:39.182579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.182618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182643 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.182662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-08-29 21:01:39.182689 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.182701 | orchestrator | 2025-08-29 21:01:39.182712 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-08-29 21:01:39.182723 | orchestrator | Friday 29 August 2025 20:57:13 +0000 (0:00:01.173) 0:01:36.792 ********* 2025-08-29 21:01:39.182734 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.182745 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.182762 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.182773 | orchestrator | 2025-08-29 21:01:39.182784 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-08-29 21:01:39.182795 | orchestrator | Friday 29 August 2025 20:57:15 +0000 (0:00:01.392) 0:01:38.184 ********* 2025-08-29 21:01:39.182806 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.182817 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.182828 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.182838 | orchestrator | 2025-08-29 21:01:39.182849 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-08-29 21:01:39.182860 | orchestrator | Friday 29 August 2025 20:57:17 +0000 (0:00:02.054) 0:01:40.239 ********* 2025-08-29 21:01:39.182871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.182882 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.182898 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.182909 | orchestrator | 2025-08-29 21:01:39.182920 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-08-29 21:01:39.182931 | orchestrator | Friday 29 August 2025 20:57:17 +0000 (0:00:00.289) 0:01:40.528 ********* 2025-08-29 21:01:39.182942 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.182953 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.182964 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.182975 | orchestrator | 2025-08-29 21:01:39.182986 | orchestrator | TASK [include_role : designate] 
************************************************ 2025-08-29 21:01:39.182997 | orchestrator | Friday 29 August 2025 20:57:18 +0000 (0:00:00.461) 0:01:40.990 ********* 2025-08-29 21:01:39.183007 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.183018 | orchestrator | 2025-08-29 21:01:39.183029 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-08-29 21:01:39.183040 | orchestrator | Friday 29 August 2025 20:57:18 +0000 (0:00:00.748) 0:01:41.738 ********* 2025-08-29 21:01:39.183052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:01:39.183064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:01:39.183165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:01:39.183255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183402 | orchestrator | 2025-08-29 21:01:39.183413 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-08-29 21:01:39.183425 | orchestrator | Friday 29 August 2025 20:57:22 +0000 (0:00:03.800) 0:01:45.539 ********* 2025-08-29 21:01:39.183442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:01:39.183454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:01:39.183529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183569 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.183581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:01:39.183639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:01:39.183670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.183705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.183763 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.183774 | orchestrator | 2025-08-29 21:01:39.183785 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-08-29 21:01:39.183801 | orchestrator | Friday 29 August 2025 20:57:23 +0000 (0:00:00.972) 0:01:46.512 ********* 2025-08-29 21:01:39.183812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183834 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.183845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 21:01:39.183895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.183906 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
21:01:39.183917 | orchestrator | 2025-08-29 21:01:39.183928 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 21:01:39.183939 | orchestrator | Friday 29 August 2025 20:57:24 +0000 (0:00:00.865) 0:01:47.377 ********* 2025-08-29 21:01:39.183950 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.183961 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.183971 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.183982 | orchestrator | 2025-08-29 21:01:39.183993 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 21:01:39.184004 | orchestrator | Friday 29 August 2025 20:57:25 +0000 (0:00:01.298) 0:01:48.675 ********* 2025-08-29 21:01:39.184014 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.184025 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.184036 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.184047 | orchestrator | 2025-08-29 21:01:39.184058 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 21:01:39.184069 | orchestrator | Friday 29 August 2025 20:57:27 +0000 (0:00:01.921) 0:01:50.597 ********* 2025-08-29 21:01:39.184080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.184090 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.184101 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.184112 | orchestrator | 2025-08-29 21:01:39.184123 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 21:01:39.184134 | orchestrator | Friday 29 August 2025 20:57:28 +0000 (0:00:00.393) 0:01:50.991 ********* 2025-08-29 21:01:39.184145 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.184155 | orchestrator | 2025-08-29 21:01:39.184166 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 21:01:39.184177 | orchestrator | Friday 29 August 2025 20:57:28 +0000 (0:00:00.707) 0:01:51.698 ********* 2025-08-29 21:01:39.184255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:01:39.184273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.184301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:01:39.184320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.184350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:01:39.184369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.184388 | orchestrator | 2025-08-29 21:01:39.184400 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 21:01:39.184411 | orchestrator | Friday 29 August 2025 20:57:33 +0000 (0:00:04.348) 0:01:56.046 ********* 2025-08-29 21:01:39.184598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:01:39.184624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.184646 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.184659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:01:39.184749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.184774 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.184785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:01:39.184862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 
21:01:39.184888 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.184899 | orchestrator | 2025-08-29 21:01:39.184909 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 21:01:39.184919 | orchestrator | Friday 29 August 2025 20:57:36 +0000 (0:00:03.436) 0:01:59.483 ********* 2025-08-29 21:01:39.184929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.184940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.184950 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.184961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.184971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.184981 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.184991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.185073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 21:01:39.185088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.185099 | orchestrator | 2025-08-29 21:01:39.185109 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 21:01:39.185126 | orchestrator | Friday 29 August 2025 20:57:39 +0000 (0:00:02.620) 0:02:02.103 ********* 2025-08-29 21:01:39.185136 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.185145 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.185155 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.185165 | orchestrator | 2025-08-29 21:01:39.185175 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 21:01:39.185184 | orchestrator | Friday 29 August 2025 20:57:40 +0000 (0:00:01.237) 0:02:03.341 ********* 2025-08-29 21:01:39.185194 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.185221 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.185231 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.185241 | orchestrator | 2025-08-29 21:01:39.185251 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 21:01:39.185265 | orchestrator | Friday 29 August 2025 20:57:42 +0000 (0:00:01.912) 0:02:05.253 ********* 2025-08-29 21:01:39.185275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.185285 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.185295 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.185304 | orchestrator | 2025-08-29 21:01:39.185314 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 21:01:39.185323 | orchestrator | Friday 29 August 2025 20:57:42 +0000 (0:00:00.468) 0:02:05.721 ********* 2025-08-29 21:01:39.185333 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.185343 | orchestrator | 2025-08-29 21:01:39.185352 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 21:01:39.185362 | orchestrator | Friday 29 August 2025 20:57:43 +0000 (0:00:00.854) 0:02:06.575 ********* 2025-08-29 21:01:39.185373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:01:39.185384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:01:39.185395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:01:39.185405 | orchestrator | 2025-08-29 21:01:39.185415 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 21:01:39.185431 | orchestrator | Friday 29 August 2025 20:57:46 +0000 (0:00:02.976) 0:02:09.552 ********* 2025-08-29 21:01:39.185505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:01:39.185525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:01:39.185536 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.185546 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.185556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:01:39.185566 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.185575 | orchestrator | 2025-08-29 21:01:39.185585 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 21:01:39.185595 | orchestrator | Friday 29 August 2025 20:57:47 +0000 (0:00:00.624) 0:02:10.176 ********* 2025-08-29 21:01:39.185604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185624 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.185634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185654 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.185664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 21:01:39.185690 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.185700 | orchestrator | 2025-08-29 21:01:39.185709 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 21:01:39.185719 | orchestrator | Friday 29 August 2025 20:57:47 +0000 (0:00:00.606) 0:02:10.783 ********* 2025-08-29 21:01:39.185729 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.185738 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.185748 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.185757 | orchestrator | 2025-08-29 21:01:39.185767 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 21:01:39.185777 | orchestrator | Friday 29 August 2025 20:57:49 +0000 (0:00:01.400) 0:02:12.184 ********* 2025-08-29 21:01:39.185786 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.185796 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.185806 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.185815 | orchestrator | 2025-08-29 21:01:39.185885 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 21:01:39.185898 | orchestrator | Friday 29 August 2025 20:57:51 +0000 (0:00:01.981) 0:02:14.165 ********* 2025-08-29 21:01:39.185908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.185918 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 21:01:39.185927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.185937 | orchestrator | 2025-08-29 21:01:39.185946 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 21:01:39.185956 | orchestrator | Friday 29 August 2025 20:57:51 +0000 (0:00:00.484) 0:02:14.649 ********* 2025-08-29 21:01:39.185965 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.185975 | orchestrator | 2025-08-29 21:01:39.185984 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 21:01:39.185994 | orchestrator | Friday 29 August 2025 20:57:52 +0000 (0:00:00.870) 0:02:15.520 ********* 2025-08-29 21:01:39.186011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:01:39.186125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:01:39.186148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:01:39.186166 | orchestrator | 2025-08-29 21:01:39.186175 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 21:01:39.186185 | orchestrator | Friday 29 August 2025 20:57:56 +0000 (0:00:03.494) 0:02:19.014 ********* 2025-08-29 21:01:39.186320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:01:39.186338 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.186349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:01:39.186367 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.186445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:01:39.186460 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.186470 | orchestrator | 2025-08-29 21:01:39.186480 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 21:01:39.186490 | orchestrator | Friday 29 August 2025 20:57:56 +0000 (0:00:00.872) 0:02:19.887 ********* 2025-08-29 21:01:39.186500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 21:01:39.186581 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.186651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186665 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 21:01:39.186685 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.186700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 21:01:39.186741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 21:01:39.186751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 21:01:39.186760 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.186770 | orchestrator | 2025-08-29 21:01:39.186779 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-08-29 21:01:39.186787 | orchestrator | Friday 29 August 2025 20:57:57 +0000 (0:00:00.942) 0:02:20.830 ********* 2025-08-29 21:01:39.186794 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.186802 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.186810 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.186818 | orchestrator | 2025-08-29 21:01:39.186826 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-08-29 21:01:39.186834 | orchestrator | Friday 29 August 2025 20:57:59 +0000 (0:00:01.327) 0:02:22.158 ********* 2025-08-29 21:01:39.186841 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.186849 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.186857 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.186865 | orchestrator | 2025-08-29 21:01:39.186873 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-08-29 21:01:39.186881 | orchestrator | Friday 29 August 2025 20:58:01 +0000 
(0:00:02.071) 0:02:24.229 ********* 2025-08-29 21:01:39.186889 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.186896 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.186904 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.186912 | orchestrator | 2025-08-29 21:01:39.186920 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-08-29 21:01:39.186928 | orchestrator | Friday 29 August 2025 20:58:01 +0000 (0:00:00.505) 0:02:24.735 ********* 2025-08-29 21:01:39.186935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.186943 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.186951 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.186959 | orchestrator | 2025-08-29 21:01:39.186967 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-08-29 21:01:39.186975 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.299) 0:02:25.035 ********* 2025-08-29 21:01:39.186982 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.186990 | orchestrator | 2025-08-29 21:01:39.186998 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-08-29 21:01:39.187006 | orchestrator | Friday 29 August 2025 20:58:02 +0000 (0:00:00.885) 0:02:25.920 ********* 2025-08-29 21:01:39.187063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:01:39.187085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:01:39.187095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:01:39.187113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:01:39.187171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:01:39.187215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:01:39.187224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187233 | orchestrator | 2025-08-29 21:01:39.187241 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-08-29 21:01:39.187249 | orchestrator | Friday 29 August 2025 20:58:06 +0000 (0:00:03.554) 0:02:29.474 ********* 2025-08-29 21:01:39.187257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:01:39.187316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 
21:01:39.187334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187342 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.187355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:01:39.187364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:01:39.187373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187381 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.187437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:01:39.187457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:01:39.187469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:01:39.187478 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.187486 | orchestrator | 2025-08-29 21:01:39.187494 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-08-29 21:01:39.187502 | orchestrator | Friday 29 August 2025 20:58:07 +0000 (0:00:00.613) 0:02:30.088 ********* 2025-08-29 21:01:39.187510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187526 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.187535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187551 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.187559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-08-29 21:01:39.187575 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.187583 | orchestrator | 2025-08-29 21:01:39.187591 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-08-29 21:01:39.187599 | orchestrator | Friday 29 August 2025 20:58:07 +0000 (0:00:00.769) 0:02:30.857 ********* 2025-08-29 21:01:39.187607 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.187621 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.187629 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.187636 | orchestrator | 2025-08-29 21:01:39.187644 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-08-29 21:01:39.187652 | orchestrator | Friday 29 August 2025 20:58:09 +0000 (0:00:01.589) 0:02:32.447 ********* 2025-08-29 21:01:39.187660 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.187668 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.187676 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.187684 | orchestrator | 2025-08-29 21:01:39.187691 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-08-29 21:01:39.187749 | orchestrator | Friday 29 August 2025 20:58:11 +0000 (0:00:02.146) 0:02:34.593 ********* 2025-08-29 21:01:39.187760 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.187768 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.187776 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.187784 | orchestrator | 2025-08-29 21:01:39.187792 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-08-29 21:01:39.187800 | orchestrator | Friday 29 August 2025 20:58:11 +0000 (0:00:00.325) 0:02:34.918 ********* 2025-08-29 21:01:39.187807 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.187815 | orchestrator | 2025-08-29 21:01:39.187823 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-08-29 21:01:39.187831 | orchestrator | Friday 29 August 2025 20:58:12 +0000 (0:00:00.976) 0:02:35.895 ********* 2025-08-29 21:01:39.187839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:01:39.187848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.187872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:01:39.187886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.187946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:01:39.187961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.187970 | orchestrator | 2025-08-29 21:01:39.187978 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-08-29 21:01:39.187986 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:04.201) 0:02:40.096 ********* 2025-08-29 21:01:39.187995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:01:39.188003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188017 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.188073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:01:39.188084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188092 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.188105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:01:39.188114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.188139 | orchestrator | 2025-08-29 21:01:39.188147 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-08-29 21:01:39.188155 | orchestrator 
| Friday 29 August 2025 20:58:18 +0000 (0:00:00.954) 0:02:41.050 ********* 2025-08-29 21:01:39.188163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188179 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.188187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188217 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.188226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 21:01:39.188294 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.188305 | orchestrator | 2025-08-29 21:01:39.188313 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-08-29 21:01:39.188321 | orchestrator | Friday 29 August 2025 20:58:18 +0000 (0:00:00.878) 0:02:41.929 ********* 2025-08-29 21:01:39.188329 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.188337 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.188345 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.188353 | orchestrator | 2025-08-29 21:01:39.188360 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-08-29 21:01:39.188368 | orchestrator | Friday 29 August 2025 20:58:20 +0000 (0:00:01.636) 0:02:43.566 ********* 2025-08-29 21:01:39.188376 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.188384 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.188392 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.188400 | orchestrator | 2025-08-29 21:01:39.188408 | orchestrator | TASK [include_role : manila] *************************************************** 2025-08-29 21:01:39.188415 | orchestrator | Friday 29 August 2025 20:58:22 +0000 (0:00:01.977) 0:02:45.543 ********* 2025-08-29 21:01:39.188423 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.188431 | orchestrator | 2025-08-29 21:01:39.188439 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-08-29 21:01:39.188447 | orchestrator | Friday 29 August 2025 20:58:23 +0000 (0:00:01.001) 0:02:46.545 ********* 2025-08-29 21:01:39.188459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 21:01:39.188474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 21:01:39.188562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 21:01:39.188596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188683 | orchestrator | 2025-08-29 21:01:39.188695 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-08-29 21:01:39.188708 | orchestrator | Friday 29 August 2025 20:58:27 +0000 (0:00:04.282) 0:02:50.828 ********* 2025-08-29 21:01:39.188717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 21:01:39.188725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188750 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.188809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 21:01:39.188825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188855 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.188864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 21:01:39.188921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.188959 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.188967 | orchestrator | 
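[editor's note] The manila items above each carry a 'haproxy' sub-dict (manila_api for the internal VIP, manila_api_external for api.testbed.osism.xyz) that the haproxy-config tasks iterate over. The following is a minimal illustrative sketch, not the kolla-ansible template logic: it only shows how such a sub-dict, copied from the log entry above, maps to enabled internal/external frontends and their listen ports. The helper name enabled_frontends is hypothetical.

```python
# Illustrative only: mirrors the 'haproxy' sub-dict attached to the manila-api
# service item in the log above; not the actual kolla-ansible implementation.
manila_haproxy = {
    "manila_api": {
        "enabled": "yes", "mode": "http", "external": False,
        "port": "8786", "listen_port": "8786",
    },
    "manila_api_external": {
        "enabled": "yes", "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8786", "listen_port": "8786",
    },
}

def enabled_frontends(haproxy_conf):
    """Yield (name, scope, listen_port) for every enabled frontend entry."""
    for name, entry in haproxy_conf.items():
        # the log shows 'enabled' both as the string 'yes' and as a boolean
        if str(entry.get("enabled", "no")).lower() not in ("yes", "true"):
            continue
        scope = "external" if entry.get("external") else "internal"
        yield name, scope, entry["listen_port"]

for name, scope, port in enabled_frontends(manila_haproxy):
    print(f"{name}: {scope} frontend on port {port}")
```

With the data above this prints one internal and one external frontend, both on 8786, which is exactly the pair of entries the subsequent "Configuring firewall for manila" task loops over.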
2025-08-29 21:01:39.188975 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 21:01:39.188983 | orchestrator | Friday 29 August 2025 20:58:28 +0000 (0:00:00.797) 0:02:51.625 ********* 2025-08-29 21:01:39.188992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189008 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 21:01:39.189048 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189056 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189064 | orchestrator | 2025-08-29 21:01:39.189072 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 21:01:39.189080 | orchestrator | Friday 29 August 2025 20:58:29 +0000 (0:00:00.760) 0:02:52.386 ********* 2025-08-29 21:01:39.189087 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.189095 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.189103 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.189111 | orchestrator | 2025-08-29 21:01:39.189119 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 21:01:39.189127 | orchestrator | Friday 29 August 2025 20:58:30 +0000 (0:00:01.275) 0:02:53.661 ********* 2025-08-29 21:01:39.189134 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.189142 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.189150 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.189158 | orchestrator | 2025-08-29 21:01:39.189166 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 21:01:39.189174 | orchestrator | Friday 29 August 2025 20:58:32 +0000 (0:00:01.960) 0:02:55.621 ********* 2025-08-29 21:01:39.189181 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.189189 | orchestrator | 2025-08-29 21:01:39.189197 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 21:01:39.189250 | orchestrator | Friday 29 August 2025 20:58:33 +0000 (0:00:01.294) 0:02:56.916 ********* 2025-08-29 21:01:39.189259 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:01:39.189272 | orchestrator | 2025-08-29 21:01:39.189280 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 21:01:39.189289 | orchestrator | Friday 29 August 2025 20:58:36 +0000 (0:00:02.879) 0:02:59.796 ********* 2025-08-29 21:01:39.189360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189382 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189467 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189497 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189505 | orchestrator | 2025-08-29 21:01:39.189513 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 21:01:39.189528 | orchestrator | Friday 29 August 2025 20:58:39 +0000 (0:00:02.189) 0:03:01.985 ********* 2025-08-29 21:01:39.189590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189608 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:01:39.189700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 21:01:39.189707 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189714 | orchestrator | 2025-08-29 21:01:39.189721 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 21:01:39.189727 | orchestrator | Friday 29 August 2025 20:58:41 +0000 (0:00:02.351) 0:03:04.337 ********* 2025-08-29 21:01:39.189734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189800 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189825 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 21:01:39.189847 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189853 | orchestrator | 2025-08-29 21:01:39.189860 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-08-29 21:01:39.189867 | orchestrator | Friday 29 August 2025 20:58:43 +0000 (0:00:02.319) 0:03:06.656 ********* 2025-08-29 21:01:39.189879 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.189885 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.189892 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.189898 | orchestrator | 2025-08-29 21:01:39.189905 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 21:01:39.189912 | orchestrator | Friday 29 August 2025 20:58:45 +0000 (0:00:01.939) 0:03:08.596 ********* 
2025-08-29 21:01:39.189919 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189925 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189932 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189938 | orchestrator | 2025-08-29 21:01:39.189945 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 21:01:39.189952 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:01.440) 0:03:10.037 ********* 2025-08-29 21:01:39.189958 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.189965 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.189971 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.189978 | orchestrator | 2025-08-29 21:01:39.189984 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 21:01:39.189991 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:00.406) 0:03:10.443 ********* 2025-08-29 21:01:39.189998 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.190004 | orchestrator | 2025-08-29 21:01:39.190011 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 21:01:39.190037 | orchestrator | Friday 29 August 2025 20:58:48 +0000 (0:00:01.078) 0:03:11.521 ********* 2025-08-29 21:01:39.190092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 21:01:39.190107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 21:01:39.190114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 
'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 21:01:39.190129 | orchestrator | 2025-08-29 21:01:39.190136 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 21:01:39.190143 | orchestrator | Friday 29 August 2025 20:58:50 +0000 (0:00:01.450) 0:03:12.972 ********* 2025-08-29 21:01:39.190150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 21:01:39.190157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 21:01:39.190164 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.190171 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.190234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 21:01:39.190245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.190252 | orchestrator | 2025-08-29 21:01:39.190258 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 21:01:39.190265 | orchestrator | Friday 29 August 2025 20:58:50 +0000 (0:00:00.541) 0:03:13.513 ********* 2025-08-29 21:01:39.190272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 21:01:39.190279 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.190289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 21:01:39.190296 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.190303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 21:01:39.190316 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.190323 | orchestrator | 2025-08-29 21:01:39.190330 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 21:01:39.190336 | orchestrator | Friday 29 August 2025 20:58:51 +0000 (0:00:00.566) 0:03:14.080 ********* 2025-08-29 21:01:39.190343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.190350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.190356 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.190363 | orchestrator | 2025-08-29 21:01:39.190370 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 21:01:39.190377 | orchestrator | Friday 29 August 2025 20:58:51 +0000 (0:00:00.360) 0:03:14.441 ********* 2025-08-29 21:01:39.190383 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.190390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.190396 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.190403 | orchestrator | 2025-08-29 21:01:39.190410 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-08-29 21:01:39.190416 | orchestrator | Friday 29 August 2025 20:58:52 +0000 (0:00:01.130) 0:03:15.571 ********* 2025-08-29 21:01:39.190423 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.190429 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.190436 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.190443 | orchestrator | 2025-08-29 21:01:39.190450 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 21:01:39.190456 | orchestrator | Friday 29 August 2025 20:58:53 +0000 (0:00:00.420) 0:03:15.992 ********* 2025-08-29 21:01:39.190463 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.190469 | orchestrator | 2025-08-29 21:01:39.190476 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 21:01:39.190482 | orchestrator | Friday 29 August 2025 20:58:54 +0000 (0:00:01.208) 0:03:17.200 ********* 2025-08-29 21:01:39.190489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:01:39.190539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:01:39.190553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.190653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.190675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:01:39.190828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.190846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.190853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190903 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.190948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.190991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.191016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.191023 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.191031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.191285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191303 | orchestrator | 2025-08-29 21:01:39.191310 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-08-29 21:01:39.191316 | orchestrator | Friday 29 August 2025 20:58:59 +0000 (0:00:04.874) 0:03:22.075 ********* 2025-08-29 21:01:39.191323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:01:39.191331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.191414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:01:39.191519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.191687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.191744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191758 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191780 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.191787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:01:39.191863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.191870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 
21:01:39.191896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.191960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.191975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 21:01:39.191987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.192036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.192057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.192064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.192083 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.192090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.192124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 21:01:39.192143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 21:01:39.192150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 21:01:39.192169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:01:39.192195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.192253 | orchestrator | 2025-08-29 21:01:39.192260 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 21:01:39.192267 | orchestrator | Friday 29 August 2025 20:59:00 +0000 (0:00:01.525) 0:03:23.601 ********* 2025-08-29 21:01:39.192274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192292 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.192299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192317 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.192324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 21:01:39.192337 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.192344 | orchestrator | 2025-08-29 21:01:39.192350 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 21:01:39.192356 | orchestrator | Friday 29 August 2025 20:59:02 +0000 (0:00:01.477) 0:03:25.078 ********* 2025-08-29 21:01:39.192362 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.192368 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.192375 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.192381 | orchestrator | 2025-08-29 21:01:39.192387 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 21:01:39.192393 | orchestrator | Friday 29 August 2025 20:59:03 +0000 (0:00:01.845) 0:03:26.924 ********* 2025-08-29 21:01:39.192399 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.192405 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.192412 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.192418 | orchestrator | 2025-08-29 21:01:39.192424 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 21:01:39.192430 | orchestrator | Friday 29 August 2025 20:59:06 +0000 (0:00:02.071) 0:03:28.995 ********* 2025-08-29 
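The neutron items dumped by the haproxy-config tasks above all share the same kolla-ansible service-definition shape: container_name, image, enabled, host_in_groups, volumes, an optional healthcheck, and, for API services such as neutron-server, a haproxy map describing the internal and external frontends. In this run only neutron-server is reported as changed; every item that is disabled, not mapped to the host, or without a haproxy section shows skipping. A minimal Python sketch of that filter, consistent with the items logged above but not the actual kolla-ansible role logic (should_render is a hypothetical helper and the sample data is trimmed from the log):

# Illustrative only: why most neutron items above are "skipping" while
# neutron-server is "changed". Data trimmed from the logged items.
services = {
    "neutron-server": {
        "enabled": True,
        "host_in_groups": True,
        "haproxy": {"neutron_server": {"enabled": True, "port": "9696"}},
    },
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": False},
    "neutron-dhcp-agent": {"enabled": False, "host_in_groups": True},
}

def should_render(svc):
    # Hypothetical filter: enabled, mapped to this host, and exposing a haproxy map.
    return bool(svc.get("enabled")) and bool(svc.get("host_in_groups")) and "haproxy" in svc

for name, svc in services.items():
    print(f"{name}: {'changed' if should_render(svc) else 'skipping'}")

The exact condition used by the role may differ; the sketch only reproduces the pattern visible in the skip/change results above.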
21:01:39.192436 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.192442 | orchestrator | 2025-08-29 21:01:39.192448 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-08-29 21:01:39.192455 | orchestrator | Friday 29 August 2025 20:59:07 +0000 (0:00:01.146) 0:03:30.142 ********* 2025-08-29 21:01:39.192461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192512 | orchestrator | 2025-08-29 21:01:39.192519 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 21:01:39.192525 | orchestrator | Friday 29 August 2025 20:59:10 +0000 (0:00:03.216) 0:03:33.358 ********* 2025-08-29 
21:01:39.192532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.192538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.192545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.192551 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.192574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.192586 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.192592 | orchestrator | 2025-08-29 21:01:39.192598 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 21:01:39.192604 | orchestrator | Friday 29 August 2025 20:59:11 +0000 (0:00:00.826) 0:03:34.185 ********* 2025-08-29 21:01:39.192611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})  2025-08-29 21:01:39.192620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 21:01:39.192627 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.192634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 21:01:39.192640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 21:01:39.192646 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.192653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 21:01:39.192659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 21:01:39.192665 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.192672 | orchestrator | 2025-08-29 21:01:39.192679 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 21:01:39.192686 | orchestrator | Friday 29 August 2025 20:59:11 +0000 (0:00:00.719) 0:03:34.905 ********* 2025-08-29 21:01:39.192693 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.192701 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.192708 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.192715 | orchestrator | 2025-08-29 21:01:39.192721 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 21:01:39.192728 | orchestrator | Friday 29 August 2025 20:59:13 +0000 (0:00:01.224) 0:03:36.129 ********* 2025-08-29 21:01:39.192736 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.192742 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.192749 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.192756 | orchestrator | 2025-08-29 21:01:39.192763 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 21:01:39.192770 | orchestrator | Friday 29 August 2025 20:59:15 +0000 (0:00:01.967) 0:03:38.096 ********* 2025-08-29 21:01:39.192778 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.192785 | orchestrator | 2025-08-29 21:01:39.192792 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 21:01:39.192798 | orchestrator | Friday 29 August 2025 20:59:16 +0000 (0:00:01.459) 0:03:39.556 ********* 2025-08-29 21:01:39.192822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192859 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.192907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192920 | orchestrator | 2025-08-29 21:01:39.192926 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 21:01:39.192933 | orchestrator | Friday 29 August 2025 20:59:20 +0000 (0:00:04.244) 0:03:43.800 ********* 2025-08-29 21:01:39.192939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.192967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.192981 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.192991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.192998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.193005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.193015 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.193049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.193055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.193062 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193068 | orchestrator | 2025-08-29 21:01:39.193075 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-08-29 21:01:39.193081 | orchestrator | Friday 29 August 2025 20:59:21 +0000 (0:00:00.604) 0:03:44.405 ********* 2025-08-29 21:01:39.193087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193117 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}})  2025-08-29 21:01:39.193143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193149 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 21:01:39.193198 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193217 | orchestrator | 2025-08-29 21:01:39.193223 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-08-29 21:01:39.193230 | orchestrator | Friday 29 August 2025 20:59:22 +0000 (0:00:01.189) 0:03:45.595 ********* 2025-08-29 21:01:39.193236 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.193242 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.193248 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.193254 | orchestrator | 2025-08-29 21:01:39.193264 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-08-29 21:01:39.193270 | orchestrator | Friday 29 August 2025 20:59:24 +0000 (0:00:01.423) 0:03:47.018 ********* 2025-08-29 21:01:39.193277 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.193283 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.193289 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.193295 | orchestrator | 2025-08-29 21:01:39.193301 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-08-29 21:01:39.193307 | orchestrator | Friday 29 August 2025 20:59:26 +0000 (0:00:01.999) 0:03:49.017 ********* 2025-08-29 21:01:39.193313 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.193319 | orchestrator | 2025-08-29 21:01:39.193326 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-08-29 21:01:39.193332 | orchestrator | Friday 29 August 2025 20:59:27 +0000 (0:00:01.493) 0:03:50.511 ********* 2025-08-29 21:01:39.193338 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-08-29 21:01:39.193349 | orchestrator | 2025-08-29 21:01:39.193355 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-08-29 21:01:39.193361 | orchestrator | Friday 29 August 2025 20:59:28 
+0000 (0:00:00.830) 0:03:51.341 ********* 2025-08-29 21:01:39.193368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 21:01:39.193375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 21:01:39.193381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 21:01:39.193388 | orchestrator | 2025-08-29 21:01:39.193394 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-08-29 21:01:39.193401 | orchestrator | Friday 29 August 2025 20:59:32 +0000 (0:00:03.784) 0:03:55.126 ********* 2025-08-29 21:01:39.193425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193433 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193446 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193465 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193471 | orchestrator | 2025-08-29 21:01:39.193477 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-08-29 21:01:39.193487 | orchestrator | Friday 29 August 2025 20:59:33 +0000 (0:00:01.314) 0:03:56.440 ********* 2025-08-29 21:01:39.193494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193507 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193527 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 21:01:39.193546 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193552 | orchestrator | 2025-08-29 21:01:39.193558 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 21:01:39.193564 | orchestrator | Friday 29 August 2025 20:59:34 +0000 (0:00:01.410) 0:03:57.851 ********* 2025-08-29 21:01:39.193570 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.193576 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.193583 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.193589 | orchestrator | 2025-08-29 21:01:39.193595 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 21:01:39.193601 | orchestrator | Friday 29 August 2025 20:59:37 +0000 (0:00:02.464) 0:04:00.315 ********* 2025-08-29 21:01:39.193607 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.193613 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.193620 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.193626 | orchestrator | 2025-08-29 21:01:39.193632 | orchestrator | TASK 
[nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-08-29 21:01:39.193638 | orchestrator | Friday 29 August 2025 20:59:40 +0000 (0:00:02.833) 0:04:03.149 ********* 2025-08-29 21:01:39.193644 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-08-29 21:01:39.193651 | orchestrator | 2025-08-29 21:01:39.193657 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-08-29 21:01:39.193680 | orchestrator | Friday 29 August 2025 20:59:41 +0000 (0:00:01.327) 0:04:04.477 ********* 2025-08-29 21:01:39.193688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193699 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193734 | orchestrator | 2025-08-29 21:01:39.193740 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-08-29 21:01:39.193747 | orchestrator | Friday 29 August 2025 20:59:42 +0000 (0:00:01.193) 0:04:05.671 ********* 2025-08-29 21:01:39.193753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193760 | orchestrator | skipping: [testbed-node-0] 
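For orientation while reading the loop items in this stretch of the log: every item handed to the haproxy-config tasks is a service dict whose value['haproxy'] sub-dict describes an internal and an external frontend (port, listen_port, tls_backend, and optional backend_http_extra such as the 'timeout tunnel 1h' seen for the console proxies). The short Python sketch below only restates that shape as it appears in the entries above and adds a small illustrative filter for which frontends are switched on; the helper name and the filtering logic are assumptions for illustration, not code taken from kolla-ansible.

    # Illustrative only: a hand-copied subset of one loop item as printed in the log,
    # plus a hypothetical helper showing the kind of enabled-filtering the
    # changed/skipping results above reflect.
    item = {
        "key": "nova-novncproxy",
        "value": {
            "group": "nova-novncproxy",
            "enabled": True,
            "haproxy": {
                "nova_novncproxy": {
                    "enabled": True, "mode": "http", "external": False,
                    "port": "6080", "listen_port": "6080",
                    "backend_http_extra": ["timeout tunnel 1h"],
                },
                "nova_novncproxy_external": {
                    "enabled": True, "mode": "http", "external": True,
                    "external_fqdn": "api.testbed.osism.xyz",
                    "port": "6080", "listen_port": "6080",
                    "backend_http_extra": ["timeout tunnel 1h"],
                },
            },
        },
    }

    def frontends_to_render(item):
        """Return the haproxy frontends of a service item that are switched on.

        Illustrative sketch: disabled services (e.g. nova-spicehtml5proxy with
        enabled=False) yield nothing, enabled ones yield both the internal and
        the external entry. 'enabled' values in the log are a mix of booleans
        and 'yes'/'no' strings, so both forms are accepted here.
        """
        if item["value"].get("enabled") not in (True, "yes"):
            return {}
        return {name: cfg
                for name, cfg in item["value"].get("haproxy", {}).items()
                if cfg.get("enabled") in (True, "yes")}

    print(sorted(frontends_to_render(item)))
    # -> ['nova_novncproxy', 'nova_novncproxy_external']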
2025-08-29 21:01:39.193766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193773 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 21:01:39.193785 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193791 | orchestrator | 2025-08-29 21:01:39.193798 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-08-29 21:01:39.193804 | orchestrator | Friday 29 August 2025 20:59:44 +0000 (0:00:01.304) 0:04:06.975 ********* 2025-08-29 21:01:39.193810 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.193828 | orchestrator | 2025-08-29 21:01:39.193835 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 21:01:39.193862 | orchestrator | Friday 29 August 2025 20:59:45 +0000 (0:00:01.853) 0:04:08.829 ********* 2025-08-29 21:01:39.193869 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.193876 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.193882 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.193888 | orchestrator | 2025-08-29 21:01:39.193894 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 21:01:39.193900 | orchestrator | Friday 29 August 2025 20:59:49 +0000 (0:00:03.133) 0:04:11.962 ********* 2025-08-29 21:01:39.193907 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.193913 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.193919 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.193925 | orchestrator | 2025-08-29 21:01:39.193931 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-08-29 21:01:39.193937 | orchestrator | Friday 29 August 2025 20:59:53 +0000 (0:00:03.992) 0:04:15.955 ********* 2025-08-29 21:01:39.193944 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-08-29 21:01:39.193950 | orchestrator | 2025-08-29 21:01:39.193956 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-08-29 21:01:39.193962 | orchestrator | Friday 29 August 2025 20:59:53 +0000 (0:00:00.949) 0:04:16.904 ********* 2025-08-29 21:01:39.193972 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.193979 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.193985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.193991 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.193998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.194004 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.194011 | orchestrator | 2025-08-29 21:01:39.194036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-08-29 21:01:39.194044 | orchestrator | Friday 29 August 2025 20:59:55 +0000 (0:00:01.519) 0:04:18.424 ********* 2025-08-29 21:01:39.194051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.194062 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.194068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.194075 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.194102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 21:01:39.194110 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.194116 | orchestrator | 2025-08-29 21:01:39.194123 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-08-29 21:01:39.194129 | orchestrator | Friday 29 August 2025 20:59:56 +0000 (0:00:01.502) 0:04:19.927 ********* 2025-08-29 21:01:39.194135 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.194141 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.194147 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.194154 | orchestrator | 2025-08-29 21:01:39.194160 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 21:01:39.194166 | orchestrator | Friday 29 August 2025 20:59:58 +0000 (0:00:01.393) 0:04:21.320 ********* 2025-08-29 21:01:39.194172 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.194178 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.194184 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.194191 | orchestrator | 2025-08-29 21:01:39.194197 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 21:01:39.194238 | orchestrator | Friday 29 August 2025 21:00:00 +0000 (0:00:02.503) 0:04:23.824 ********* 2025-08-29 21:01:39.194248 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.194255 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.194261 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.194267 | orchestrator | 2025-08-29 21:01:39.194273 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-08-29 21:01:39.194280 | orchestrator | Friday 29 August 2025 21:00:03 +0000 (0:00:02.899) 0:04:26.724 ********* 2025-08-29 21:01:39.194286 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.194292 | orchestrator | 2025-08-29 21:01:39.194298 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-08-29 21:01:39.194304 | orchestrator | Friday 29 August 2025 21:00:05 +0000 (0:00:01.569) 0:04:28.293 ********* 2025-08-29 21:01:39.194311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.194325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.194382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.194432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194471 | orchestrator | 2025-08-29 21:01:39.194478 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 21:01:39.194484 | orchestrator | Friday 29 August 2025 21:00:08 +0000 (0:00:03.294) 0:04:31.588 ********* 2025-08-29 21:01:39.194508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.194516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194549 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.194555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.194575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194607 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.194613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.194619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 21:01:39.194640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 21:01:39.194655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:01:39.194665 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.194671 | orchestrator | 2025-08-29 21:01:39.194676 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 21:01:39.194682 | orchestrator | Friday 29 August 2025 21:00:09 +0000 (0:00:00.814) 0:04:32.402 ********* 2025-08-29 21:01:39.194687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194693 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194698 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.194704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.194720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 21:01:39.194732 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.194737 | orchestrator | 2025-08-29 21:01:39.194742 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-08-29 21:01:39.194748 | orchestrator | Friday 29 August 2025 21:00:10 +0000 (0:00:01.016) 0:04:33.418 ********* 2025-08-29 21:01:39.194753 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.194758 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.194764 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.194769 | orchestrator | 2025-08-29 21:01:39.194775 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 21:01:39.194780 | orchestrator | Friday 29 August 2025 21:00:11 +0000 (0:00:01.264) 0:04:34.682 ********* 2025-08-29 21:01:39.194785 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.194791 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.194796 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.194802 | orchestrator | 2025-08-29 21:01:39.194807 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 21:01:39.194813 | orchestrator | Friday 29 August 2025 21:00:13 +0000 (0:00:02.047) 0:04:36.729 ********* 2025-08-29 21:01:39.194818 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.194823 | orchestrator | 2025-08-29 21:01:39.194829 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 21:01:39.194834 | orchestrator | Friday 29 August 2025 21:00:15 +0000 (0:00:01.588) 0:04:38.318 ********* 2025-08-29 21:01:39.194856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:01:39.194870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:01:39.194876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:01:39.194882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:01:39.194904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:01:39.194918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:01:39.194924 | orchestrator | 2025-08-29 21:01:39.194930 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 21:01:39.194936 | orchestrator | Friday 29 August 2025 21:00:20 +0000 (0:00:05.191) 0:04:43.510 ********* 2025-08-29 21:01:39.194941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:01:39.194947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:01:39.194953 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.194975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:01:39.194988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:01:39.194995 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:01:39.195006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:01:39.195012 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195018 | orchestrator | 2025-08-29 21:01:39.195023 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-08-29 21:01:39.195028 | orchestrator | Friday 29 August 2025 21:00:21 +0000 (0:00:00.638) 0:04:44.148 ********* 2025-08-29 21:01:39.195034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 21:01:39.195058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195071 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 21:01:39.195082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195093 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 21:01:39.195107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 21:01:39.195118 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195123 | orchestrator | 2025-08-29 21:01:39.195129 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 21:01:39.195134 | orchestrator | Friday 29 August 2025 21:00:22 +0000 (0:00:01.487) 0:04:45.636 ********* 2025-08-29 21:01:39.195140 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195145 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195150 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195156 | orchestrator | 2025-08-29 21:01:39.195161 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 21:01:39.195167 | orchestrator | Friday 29 August 2025 21:00:23 +0000 (0:00:00.426) 0:04:46.062 ********* 2025-08-29 21:01:39.195172 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195177 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195183 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195189 | orchestrator | 2025-08-29 21:01:39.195194 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 21:01:39.195211 | orchestrator | Friday 29 August 2025 21:00:24 +0000 (0:00:01.275) 0:04:47.338 ********* 2025-08-29 21:01:39.195217 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.195223 | orchestrator | 2025-08-29 21:01:39.195228 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 21:01:39.195233 | orchestrator | Friday 29 August 2025 21:00:26 +0000 (0:00:01.655) 0:04:48.994 ********* 2025-08-29 21:01:39.195239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:01:39.195266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:01:39.195273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:01:39.195354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:01:39.195391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:01:39.195428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:01:39.195501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195531 | orchestrator | 2025-08-29 21:01:39.195537 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-08-29 21:01:39.195542 | orchestrator | Friday 29 August 2025 21:00:30 +0000 (0:00:04.010) 0:04:53.005 ********* 2025-08-29 21:01:39.195548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 21:01:39.195559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195570 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 21:01:39.195594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 21:01:39.195603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195652 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 21:01:39.195673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 21:01:39.195712 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:01:39.195723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 21:01:39.195755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 21:01:39.195761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:01:39.195767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 
21:01:39.195775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:01:39.195781 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195786 | orchestrator | 2025-08-29 21:01:39.195792 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 21:01:39.195797 | orchestrator | Friday 29 August 2025 21:00:30 +0000 (0:00:00.844) 0:04:53.849 ********* 2025-08-29 21:01:39.195803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 21:01:39.195809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 21:01:39.195815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 21:01:39.195834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 21:01:39.195840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195846 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}})  2025-08-29 21:01:39.195875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 21:01:39.195880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 21:01:39.195891 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195897 | orchestrator | 2025-08-29 21:01:39.195902 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 21:01:39.195908 | orchestrator | Friday 29 August 2025 21:00:32 +0000 (0:00:01.202) 0:04:55.051 ********* 2025-08-29 21:01:39.195913 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195919 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195924 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195929 | orchestrator | 2025-08-29 21:01:39.195935 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 21:01:39.195940 | orchestrator | Friday 29 August 2025 21:00:32 +0000 (0:00:00.438) 0:04:55.490 ********* 2025-08-29 21:01:39.195948 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.195954 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.195960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.195965 | orchestrator | 2025-08-29 21:01:39.195970 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 21:01:39.195980 | orchestrator | Friday 29 August 2025 21:00:33 +0000 (0:00:01.230) 0:04:56.720 ********* 2025-08-29 21:01:39.195985 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.195991 | orchestrator | 2025-08-29 21:01:39.195996 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 21:01:39.196002 | orchestrator | Friday 29 August 2025 21:00:35 +0000 (0:00:01.401) 0:04:58.122 ********* 2025-08-29 21:01:39.196010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 21:01:39.196016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 21:01:39.196022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 21:01:39.196028 | orchestrator | 2025-08-29 21:01:39.196034 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 21:01:39.196039 | orchestrator | Friday 29 August 2025 21:00:37 +0000 (0:00:02.400) 0:05:00.522 ********* 2025-08-29 21:01:39.196047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 21:01:39.196061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 21:01:39.196067 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196072 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 21:01:39.196084 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196089 | orchestrator | 2025-08-29 21:01:39.196094 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 21:01:39.196100 | orchestrator | Friday 29 August 2025 21:00:37 +0000 (0:00:00.349) 0:05:00.872 ********* 2025-08-29 21:01:39.196105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 21:01:39.196111 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 21:01:39.196122 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 21:01:39.196133 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196138 | orchestrator | 2025-08-29 21:01:39.196147 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users 
config] *********** 2025-08-29 21:01:39.196152 | orchestrator | Friday 29 August 2025 21:00:38 +0000 (0:00:00.536) 0:05:01.408 ********* 2025-08-29 21:01:39.196158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196163 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196168 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196174 | orchestrator | 2025-08-29 21:01:39.196179 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 21:01:39.196185 | orchestrator | Friday 29 August 2025 21:00:39 +0000 (0:00:00.612) 0:05:02.021 ********* 2025-08-29 21:01:39.196190 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196195 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196212 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196218 | orchestrator | 2025-08-29 21:01:39.196224 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 21:01:39.196232 | orchestrator | Friday 29 August 2025 21:00:40 +0000 (0:00:01.053) 0:05:03.075 ********* 2025-08-29 21:01:39.196238 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:01:39.196244 | orchestrator | 2025-08-29 21:01:39.196249 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 21:01:39.196254 | orchestrator | Friday 29 August 2025 21:00:41 +0000 (0:00:01.329) 0:05:04.404 ********* 2025-08-29 21:01:39.196263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196290 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 21:01:39.196325 | orchestrator | 2025-08-29 21:01:39.196331 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 21:01:39.196337 | orchestrator | Friday 29 August 2025 21:00:47 +0000 (0:00:06.475) 0:05:10.879 ********* 2025-08-29 21:01:39.196342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.196352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.196358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 
'no'}}}})  2025-08-29 21:01:39.196375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.196381 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.196397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 21:01:39.196402 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196408 | orchestrator | 2025-08-29 21:01:39.196413 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 21:01:39.196419 | orchestrator | Friday 29 August 2025 21:00:48 +0000 (0:00:00.644) 0:05:11.524 ********* 2025-08-29 21:01:39.196425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196450 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196480 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 21:01:39.196511 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196517 | orchestrator | 2025-08-29 21:01:39.196522 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 21:01:39.196528 | orchestrator | Friday 29 August 2025 21:00:49 +0000 (0:00:00.896) 0:05:12.421 ********* 2025-08-29 21:01:39.196533 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.196539 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 21:01:39.196544 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.196549 | orchestrator | 2025-08-29 21:01:39.196555 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 21:01:39.196560 | orchestrator | Friday 29 August 2025 21:00:51 +0000 (0:00:01.998) 0:05:14.420 ********* 2025-08-29 21:01:39.196566 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.196571 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.196576 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.196582 | orchestrator | 2025-08-29 21:01:39.196587 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 21:01:39.196593 | orchestrator | Friday 29 August 2025 21:00:53 +0000 (0:00:02.249) 0:05:16.669 ********* 2025-08-29 21:01:39.196598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196604 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196609 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196614 | orchestrator | 2025-08-29 21:01:39.196620 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 21:01:39.196625 | orchestrator | Friday 29 August 2025 21:00:54 +0000 (0:00:00.342) 0:05:17.012 ********* 2025-08-29 21:01:39.196630 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196636 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196641 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196646 | orchestrator | 2025-08-29 21:01:39.196652 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 21:01:39.196657 | orchestrator | Friday 29 August 2025 21:00:54 +0000 (0:00:00.319) 0:05:17.331 ********* 2025-08-29 21:01:39.196663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196673 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196679 | orchestrator | 2025-08-29 21:01:39.196684 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 21:01:39.196689 | orchestrator | Friday 29 August 2025 21:00:54 +0000 (0:00:00.320) 0:05:17.652 ********* 2025-08-29 21:01:39.196697 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196703 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196714 | orchestrator | 2025-08-29 21:01:39.196719 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 21:01:39.196725 | orchestrator | Friday 29 August 2025 21:00:55 +0000 (0:00:00.614) 0:05:18.267 ********* 2025-08-29 21:01:39.196730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196735 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196741 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.196746 | orchestrator | 2025-08-29 21:01:39.196752 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 21:01:39.196757 | orchestrator | Friday 29 August 2025 21:00:55 +0000 (0:00:00.355) 0:05:18.622 ********* 2025-08-29 21:01:39.196768 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.196773 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.196778 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 21:01:39.196784 | orchestrator | 2025-08-29 21:01:39.196789 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 21:01:39.196795 | orchestrator | Friday 29 August 2025 21:00:56 +0000 (0:00:00.501) 0:05:19.123 ********* 2025-08-29 21:01:39.196800 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.196805 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.196811 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.196816 | orchestrator | 2025-08-29 21:01:39.196822 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 21:01:39.196827 | orchestrator | Friday 29 August 2025 21:00:57 +0000 (0:00:01.031) 0:05:20.154 ********* 2025-08-29 21:01:39.196835 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.196841 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.196846 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.196852 | orchestrator | 2025-08-29 21:01:39.196857 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 21:01:39.196862 | orchestrator | Friday 29 August 2025 21:00:57 +0000 (0:00:00.355) 0:05:20.510 ********* 2025-08-29 21:01:39.196868 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.196873 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.196879 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.196884 | orchestrator | 2025-08-29 21:01:39.196890 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-08-29 21:01:39.196895 | orchestrator | Friday 29 August 2025 21:00:58 +0000 (0:00:00.886) 0:05:21.396 ********* 2025-08-29 21:01:39.196900 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.196906 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.196911 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.196916 | orchestrator | 2025-08-29 21:01:39.196922 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-08-29 21:01:39.196927 | orchestrator | Friday 29 August 2025 21:00:59 +0000 (0:00:00.937) 0:05:22.333 ********* 2025-08-29 21:01:39.196933 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.196938 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.196943 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.196949 | orchestrator | 2025-08-29 21:01:39.196954 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 21:01:39.196960 | orchestrator | Friday 29 August 2025 21:01:00 +0000 (0:00:01.167) 0:05:23.500 ********* 2025-08-29 21:01:39.196965 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.196970 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.196976 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.196981 | orchestrator | 2025-08-29 21:01:39.196987 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 21:01:39.196992 | orchestrator | Friday 29 August 2025 21:01:05 +0000 (0:00:05.121) 0:05:28.622 ********* 2025-08-29 21:01:39.196997 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.197003 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.197008 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.197014 | orchestrator | 2025-08-29 21:01:39.197019 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql 
container] *************** 2025-08-29 21:01:39.197024 | orchestrator | Friday 29 August 2025 21:01:09 +0000 (0:00:03.771) 0:05:32.394 ********* 2025-08-29 21:01:39.197030 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.197035 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.197040 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.197046 | orchestrator | 2025-08-29 21:01:39.197051 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-08-29 21:01:39.197057 | orchestrator | Friday 29 August 2025 21:01:17 +0000 (0:00:08.526) 0:05:40.921 ********* 2025-08-29 21:01:39.197062 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.197068 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.197073 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.197082 | orchestrator | 2025-08-29 21:01:39.197088 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-08-29 21:01:39.197093 | orchestrator | Friday 29 August 2025 21:01:21 +0000 (0:00:03.746) 0:05:44.668 ********* 2025-08-29 21:01:39.197098 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:01:39.197104 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:01:39.197109 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:01:39.197115 | orchestrator | 2025-08-29 21:01:39.197120 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-08-29 21:01:39.197125 | orchestrator | Friday 29 August 2025 21:01:32 +0000 (0:00:10.868) 0:05:55.536 ********* 2025-08-29 21:01:39.197131 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197136 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197141 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197147 | orchestrator | 2025-08-29 21:01:39.197152 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-08-29 21:01:39.197157 | orchestrator | Friday 29 August 2025 21:01:32 +0000 (0:00:00.356) 0:05:55.892 ********* 2025-08-29 21:01:39.197163 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197168 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197174 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197179 | orchestrator | 2025-08-29 21:01:39.197184 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-08-29 21:01:39.197190 | orchestrator | Friday 29 August 2025 21:01:33 +0000 (0:00:00.352) 0:05:56.244 ********* 2025-08-29 21:01:39.197195 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197212 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197220 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197226 | orchestrator | 2025-08-29 21:01:39.197232 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-08-29 21:01:39.197237 | orchestrator | Friday 29 August 2025 21:01:33 +0000 (0:00:00.362) 0:05:56.607 ********* 2025-08-29 21:01:39.197242 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197248 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197253 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197259 | orchestrator | 2025-08-29 21:01:39.197264 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-08-29 21:01:39.197269 | orchestrator | Friday 29 
August 2025 21:01:34 +0000 (0:00:00.742) 0:05:57.349 ********* 2025-08-29 21:01:39.197275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197285 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197291 | orchestrator | 2025-08-29 21:01:39.197296 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-08-29 21:01:39.197302 | orchestrator | Friday 29 August 2025 21:01:34 +0000 (0:00:00.350) 0:05:57.699 ********* 2025-08-29 21:01:39.197307 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:01:39.197313 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:01:39.197318 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:01:39.197323 | orchestrator | 2025-08-29 21:01:39.197329 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-08-29 21:01:39.197334 | orchestrator | Friday 29 August 2025 21:01:35 +0000 (0:00:00.354) 0:05:58.054 ********* 2025-08-29 21:01:39.197340 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.197345 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.197353 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.197359 | orchestrator | 2025-08-29 21:01:39.197364 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-08-29 21:01:39.197370 | orchestrator | Friday 29 August 2025 21:01:36 +0000 (0:00:01.324) 0:05:59.378 ********* 2025-08-29 21:01:39.197375 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:01:39.197380 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:01:39.197386 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:01:39.197391 | orchestrator | 2025-08-29 21:01:39.197400 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:01:39.197406 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 21:01:39.197412 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 21:01:39.197417 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-08-29 21:01:39.197423 | orchestrator | 2025-08-29 21:01:39.197428 | orchestrator | 2025-08-29 21:01:39.197434 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:01:39.197439 | orchestrator | Friday 29 August 2025 21:01:37 +0000 (0:00:01.210) 0:06:00.589 ********* 2025-08-29 21:01:39.197444 | orchestrator | =============================================================================== 2025-08-29 21:01:39.197450 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.87s 2025-08-29 21:01:39.197455 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.53s 2025-08-29 21:01:39.197460 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.48s 2025-08-29 21:01:39.197466 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.70s 2025-08-29 21:01:39.197471 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.19s 2025-08-29 21:01:39.197476 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.12s 2025-08-29 21:01:39.197482 | orchestrator 
| haproxy-config : Copying over neutron haproxy config -------------------- 4.87s 2025-08-29 21:01:39.197487 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.35s 2025-08-29 21:01:39.197492 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.28s 2025-08-29 21:01:39.197498 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.24s 2025-08-29 21:01:39.197503 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.20s 2025-08-29 21:01:39.197509 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.01s 2025-08-29 21:01:39.197514 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.99s 2025-08-29 21:01:39.197519 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.80s 2025-08-29 21:01:39.197525 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.80s 2025-08-29 21:01:39.197530 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.78s 2025-08-29 21:01:39.197535 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.77s 2025-08-29 21:01:39.197541 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.75s 2025-08-29 21:01:39.197546 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.64s 2025-08-29 21:01:39.197551 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.55s 2025-08-29 21:01:39.197557 | orchestrator | 2025-08-29 21:01:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:42.207448 | orchestrator | 2025-08-29 21:01:42 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:42.208056 | orchestrator | 2025-08-29 21:01:42 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:01:42.208758 | orchestrator | 2025-08-29 21:01:42 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:01:42.208986 | orchestrator | 2025-08-29 21:01:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:45.239422 | orchestrator | 2025-08-29 21:01:45 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:45.243274 | orchestrator | 2025-08-29 21:01:45 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:01:45.243646 | orchestrator | 2025-08-29 21:01:45 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:01:45.243901 | orchestrator | 2025-08-29 21:01:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:48.277861 | orchestrator | 2025-08-29 21:01:48 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:48.279696 | orchestrator | 2025-08-29 21:01:48 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:01:48.280691 | orchestrator | 2025-08-29 21:01:48 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:01:48.280895 | orchestrator | 2025-08-29 21:01:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:01:51.310506 | orchestrator | 2025-08-29 21:01:51 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:01:51.311454 | orchestrator | 
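The status lines above (and the condensed block below) come from a wait loop that polls the state of the three queued tasks until they finish. A minimal sketch of such a poll-until-terminal loop, assuming a hypothetical get_task_state(task_id) helper; the real client the job uses is not shown in this log:

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every task until it reaches a terminal state.
        # get_task_state is a hypothetical callable: task id -> state string
        # (e.g. "STARTED", "SUCCESS"); the actual client is not part of this log.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)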
[... task-status polling condensed: from 21:01:51 to 21:03:53 the same three status lines repeat every ~3 seconds — tasks d17d85b9-add6-457b-978d-dd39222789b5, a9a6123f-9bfc-442c-b1d1-01d5329ac3de and 21b54432-1429-4394-81c0-1f698f101169 remain in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" ...]
2025-08-29 21:03:56.180192 | orchestrator | 2025-08-29 21:03:56 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state
STARTED 2025-08-29 21:03:56.181529 | orchestrator | 2025-08-29 21:03:56 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:03:56.183267 | orchestrator | 2025-08-29 21:03:56 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:03:56.183309 | orchestrator | 2025-08-29 21:03:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:03:59.225454 | orchestrator | 2025-08-29 21:03:59 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:03:59.226343 | orchestrator | 2025-08-29 21:03:59 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:03:59.227788 | orchestrator | 2025-08-29 21:03:59 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:03:59.227817 | orchestrator | 2025-08-29 21:03:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:02.267623 | orchestrator | 2025-08-29 21:04:02 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:04:02.269671 | orchestrator | 2025-08-29 21:04:02 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:02.271659 | orchestrator | 2025-08-29 21:04:02 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:02.271736 | orchestrator | 2025-08-29 21:04:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:05.325063 | orchestrator | 2025-08-29 21:04:05 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:04:05.326594 | orchestrator | 2025-08-29 21:04:05 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:05.328187 | orchestrator | 2025-08-29 21:04:05 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:05.328228 | orchestrator | 2025-08-29 21:04:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:08.367033 | orchestrator | 2025-08-29 21:04:08 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state STARTED 2025-08-29 21:04:08.369586 | orchestrator | 2025-08-29 21:04:08 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:08.371059 | orchestrator | 2025-08-29 21:04:08 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:08.371725 | orchestrator | 2025-08-29 21:04:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:11.413507 | orchestrator | 2025-08-29 21:04:11 | INFO  | Task d17d85b9-add6-457b-978d-dd39222789b5 is in state SUCCESS 2025-08-29 21:04:11.415295 | orchestrator | 2025-08-29 21:04:11.415324 | orchestrator | 2025-08-29 21:04:11.415335 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-08-29 21:04:11.415345 | orchestrator | 2025-08-29 21:04:11.415355 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 21:04:11.415366 | orchestrator | Friday 29 August 2025 20:52:42 +0000 (0:00:00.597) 0:00:00.597 ********* 2025-08-29 21:04:11.415376 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.415386 | orchestrator | 2025-08-29 21:04:11.415438 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 21:04:11.415448 | orchestrator | Friday 29 August 2025 20:52:43 +0000 
(0:00:00.995) 0:00:01.593 ********* 2025-08-29 21:04:11.415457 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.415467 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.415476 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.415486 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.415494 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.415546 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.415580 | orchestrator | 2025-08-29 21:04:11.415591 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 21:04:11.415638 | orchestrator | Friday 29 August 2025 20:52:45 +0000 (0:00:01.664) 0:00:03.257 ********* 2025-08-29 21:04:11.415649 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.415657 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.415666 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.415675 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.415683 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.415692 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.415701 | orchestrator | 2025-08-29 21:04:11.415709 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 21:04:11.415718 | orchestrator | Friday 29 August 2025 20:52:46 +0000 (0:00:01.078) 0:00:04.336 ********* 2025-08-29 21:04:11.415727 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.415736 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.415744 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.415753 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.415762 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.415847 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.415859 | orchestrator | 2025-08-29 21:04:11.415868 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 21:04:11.415876 | orchestrator | Friday 29 August 2025 20:52:47 +0000 (0:00:01.121) 0:00:05.460 ********* 2025-08-29 21:04:11.415885 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.415894 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.415903 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.415912 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.415923 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.415933 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.415962 | orchestrator | 2025-08-29 21:04:11.415973 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 21:04:11.416003 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.787) 0:00:06.248 ********* 2025-08-29 21:04:11.416013 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.416023 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.416032 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.416065 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.416076 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.416085 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.416114 | orchestrator | 2025-08-29 21:04:11.416126 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 21:04:11.416168 | orchestrator | Friday 29 August 2025 20:52:48 +0000 (0:00:00.576) 0:00:06.825 ********* 2025-08-29 21:04:11.416179 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.416189 | orchestrator | ok: [testbed-node-1] 
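As an aside to the ceph-facts tasks above ("Check if podman binary is present" / "Set_fact container_binary"), a minimal sketch of the same decision in Python — an assumption about the role's intent (prefer podman when present, otherwise docker), not its actual implementation:

    import shutil

    def detect_container_binary() -> str:
        # Prefer podman when it is installed, otherwise fall back to docker.
        # This mirrors the apparent intent of the ceph-facts tasks above; the
        # real role records the result as an Ansible fact (container_binary).
        return "podman" if shutil.which("podman") else "docker"

    if __name__ == "__main__":
        print(f"container_binary={detect_container_binary()}")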
2025-08-29 21:04:11.416198 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.416208 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.416218 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.416228 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.416238 | orchestrator | 2025-08-29 21:04:11.416248 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 21:04:11.416259 | orchestrator | Friday 29 August 2025 20:52:49 +0000 (0:00:01.073) 0:00:07.899 ********* 2025-08-29 21:04:11.416268 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.416278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.416286 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.416295 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.416304 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.416312 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.416321 | orchestrator | 2025-08-29 21:04:11.416329 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 21:04:11.416338 | orchestrator | Friday 29 August 2025 20:52:50 +0000 (0:00:00.840) 0:00:08.739 ********* 2025-08-29 21:04:11.416347 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.416356 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.416372 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.416380 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.416389 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.416397 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.416406 | orchestrator | 2025-08-29 21:04:11.416415 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 21:04:11.416461 | orchestrator | Friday 29 August 2025 20:52:51 +0000 (0:00:00.804) 0:00:09.544 ********* 2025-08-29 21:04:11.416473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.416482 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.416490 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:04:11.416499 | orchestrator | 2025-08-29 21:04:11.416508 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 21:04:11.416516 | orchestrator | Friday 29 August 2025 20:52:52 +0000 (0:00:00.952) 0:00:10.497 ********* 2025-08-29 21:04:11.416525 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.416534 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.416542 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.416551 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.416559 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.416568 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.416576 | orchestrator | 2025-08-29 21:04:11.416596 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 21:04:11.416605 | orchestrator | Friday 29 August 2025 20:52:53 +0000 (0:00:01.340) 0:00:11.837 ********* 2025-08-29 21:04:11.416614 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.416622 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.416631 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2025-08-29 21:04:11.416640 | orchestrator | 2025-08-29 21:04:11.416649 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 21:04:11.416657 | orchestrator | Friday 29 August 2025 20:52:56 +0000 (0:00:03.088) 0:00:14.926 ********* 2025-08-29 21:04:11.416666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.416675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.416683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.416692 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.416701 | orchestrator | 2025-08-29 21:04:11.416709 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 21:04:11.416739 | orchestrator | Friday 29 August 2025 20:52:57 +0000 (0:00:00.707) 0:00:15.633 ********* 2025-08-29 21:04:11.416751 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416780 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.416789 | orchestrator | 2025-08-29 21:04:11.416798 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 21:04:11.416923 | orchestrator | Friday 29 August 2025 20:52:58 +0000 (0:00:00.672) 0:00:16.306 ********* 2025-08-29 21:04:11.416935 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416954 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416963 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.416972 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 21:04:11.417006 | orchestrator | 2025-08-29 21:04:11.417015 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 21:04:11.417024 | orchestrator | Friday 29 August 2025 20:52:58 +0000 (0:00:00.453) 0:00:16.759 ********* 2025-08-29 21:04:11.417040 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 20:52:54.187339', 'end': '2025-08-29 20:52:54.448011', 'delta': '0:00:00.260672', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.417060 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 20:52:55.180733', 'end': '2025-08-29 20:52:55.478466', 'delta': '0:00:00.297733', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.417071 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 20:52:56.251731', 'end': '2025-08-29 20:52:56.569544', 'delta': '0:00:00.317813', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.417080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417089 | orchestrator | 2025-08-29 21:04:11.417097 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 21:04:11.417120 | orchestrator | Friday 29 August 2025 20:52:58 +0000 (0:00:00.304) 0:00:17.063 ********* 2025-08-29 21:04:11.417129 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.417137 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.417146 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.417155 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.417163 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.417172 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.417181 | orchestrator | 2025-08-29 21:04:11.417189 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 
2025-08-29 21:04:11.417198 | orchestrator | Friday 29 August 2025 20:53:00 +0000 (0:00:01.538) 0:00:18.602 ********* 2025-08-29 21:04:11.417207 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.417215 | orchestrator | 2025-08-29 21:04:11.417224 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 21:04:11.417233 | orchestrator | Friday 29 August 2025 20:53:01 +0000 (0:00:00.746) 0:00:19.349 ********* 2025-08-29 21:04:11.417241 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417250 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417259 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.417267 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417276 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417285 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.417293 | orchestrator | 2025-08-29 21:04:11.417302 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 21:04:11.417311 | orchestrator | Friday 29 August 2025 20:53:02 +0000 (0:00:01.377) 0:00:20.727 ********* 2025-08-29 21:04:11.417319 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417328 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417336 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.417345 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417353 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417362 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.417370 | orchestrator | 2025-08-29 21:04:11.417379 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 21:04:11.417388 | orchestrator | Friday 29 August 2025 20:53:04 +0000 (0:00:02.278) 0:00:23.005 ********* 2025-08-29 21:04:11.417396 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417405 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417413 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.417422 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417430 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417467 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.417477 | orchestrator | 2025-08-29 21:04:11.417523 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 21:04:11.417532 | orchestrator | Friday 29 August 2025 20:53:05 +0000 (0:00:00.954) 0:00:23.960 ********* 2025-08-29 21:04:11.417541 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417550 | orchestrator | 2025-08-29 21:04:11.417563 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 21:04:11.417603 | orchestrator | Friday 29 August 2025 20:53:05 +0000 (0:00:00.090) 0:00:24.051 ********* 2025-08-29 21:04:11.417612 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417621 | orchestrator | 2025-08-29 21:04:11.417629 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 21:04:11.417638 | orchestrator | Friday 29 August 2025 20:53:06 +0000 (0:00:00.222) 0:00:24.274 ********* 2025-08-29 21:04:11.417647 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417656 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417664 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
21:04:11.417673 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417682 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417781 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.417791 | orchestrator | 2025-08-29 21:04:11.417800 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 21:04:11.417842 | orchestrator | Friday 29 August 2025 20:53:06 +0000 (0:00:00.619) 0:00:24.893 ********* 2025-08-29 21:04:11.417853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417862 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.417879 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417887 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417896 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.417904 | orchestrator | 2025-08-29 21:04:11.417913 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 21:04:11.417921 | orchestrator | Friday 29 August 2025 20:53:07 +0000 (0:00:00.884) 0:00:25.778 ********* 2025-08-29 21:04:11.417930 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.417938 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.417947 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.417955 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.417964 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.417972 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.418100 | orchestrator | 2025-08-29 21:04:11.418112 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 21:04:11.418121 | orchestrator | Friday 29 August 2025 20:53:08 +0000 (0:00:00.646) 0:00:26.424 ********* 2025-08-29 21:04:11.418129 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.418138 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.418147 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.418155 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.418164 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.418172 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.418181 | orchestrator | 2025-08-29 21:04:11.418189 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 21:04:11.418198 | orchestrator | Friday 29 August 2025 20:53:09 +0000 (0:00:00.877) 0:00:27.301 ********* 2025-08-29 21:04:11.418207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.418216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.418224 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.418233 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.418242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.418250 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.418259 | orchestrator | 2025-08-29 21:04:11.418268 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 21:04:11.418277 | orchestrator | Friday 29 August 2025 20:53:09 +0000 (0:00:00.732) 0:00:28.034 ********* 2025-08-29 21:04:11.418285 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.418294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.418303 | orchestrator | skipping: [testbed-node-2] 
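The skipped "Resolve device link(s)" / "Set_fact build devices from resolved symlinks" tasks above normally turn configured device symlinks (for example /dev/disk/by-id entries) into canonical /dev paths before the OSD device list is built; they are skipped in this run. A small sketch of that resolution step, assuming a plain list of device paths (hypothetical helper, not the role's code):

    import os

    def resolve_device_links(devices):
        # Resolve symlinks such as /dev/disk/by-id/... to their real block
        # device paths; paths that are not symlinks are kept unchanged.
        return [os.path.realpath(dev) if os.path.islink(dev) else dev
                for dev in devices]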
2025-08-29 21:04:11.418311 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.418320 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.418328 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.418337 | orchestrator | 2025-08-29 21:04:11.418346 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 21:04:11.418354 | orchestrator | Friday 29 August 2025 20:53:10 +0000 (0:00:00.798) 0:00:28.833 ********* 2025-08-29 21:04:11.418398 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.418408 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.418417 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.418426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.418434 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.418443 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.418452 | orchestrator | 2025-08-29 21:04:11.418461 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 21:04:11.418469 | orchestrator | Friday 29 August 2025 20:53:11 +0000 (0:00:00.683) 0:00:29.516 ********* 2025-08-29 21:04:11.418486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418546 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.418606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': 
{'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.418640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418659 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.418762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part1', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part14', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part15', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part16', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.418850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.418865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.418994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419045 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419133 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.419142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0', 'dm-uuid-LVM-Fe5paP4RaHCNTyOtYUd48D6X5xKxbgN5ZF9ViuG9w6ObaWaik0UusXTfQv6Upnj5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43', 'dm-uuid-LVM-lGPIba1XCmCrdedZxItRlQ5wsxJuKeX73qUmJ1hQjhylmCIxoVBMqqptLe6gyQix'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419204 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.419215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dw20gd-69Qv-eKty-yH3R-4JPQ-wBw3-2SigSk', 'scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4', 'scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xyw5Vv-Jx4o-JuFH-yepn-cBG4-Zh8H-ZbIRS8', 'scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf', 'scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346', 'scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6', 'dm-uuid-LVM-ZlF2XCDYZD1UtTLH1LhhUrb6phYn0u1WeQqWw9uj3pc9o5aJ38s0WNm1vGaeuKzj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99', 
'dm-uuid-LVM-vIAxX0t2ryPCpAFoQJVGLUBsLLDK0CquaWCihnEnaolpdeNnFEztlu7vEUNpbjy2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-08-29 21:04:11.419518 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.419528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WAwVmE-n97f-xQhq-5QXz-u2uN-qAiK-XLuUKK', 'scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2', 'scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITcX7v-glWJ-T3DR-3wBI-Q3pc-2gFu-ScaBKL', 'scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042', 'scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df', 'scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419683 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.419697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04', 'dm-uuid-LVM-UfJRkDX0mNOpRn9nwFOha60VYmAVXjDX2XHEdZANKeX1Quek4W897jXn2caXrs1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183', 'dm-uuid-LVM-e8VEXzThEhG23c1FWIDl5qgfhvlMa1sxAi5EyN7eYryES4U80WDiFO8vV4ZFfpdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:04:11.419837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ABygGI-gKWl-Ooen-nD1A-WWd5-W18E-XUWZuU', 'scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88', 'scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JUvZKB-Jhl3-cL3g-mivQ-F4rC-Vby2-gmCMXi', 'scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59', 'scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe', 'scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:04:11.419900 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.419915 | orchestrator | 2025-08-29 21:04:11.419924 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 21:04:11.419933 | orchestrator | Friday 29 August 2025 20:53:12 +0000 (0:00:01.095) 0:00:30.612 ********* 2025-08-29 21:04:11.419943 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.419952 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.419961 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.419971 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.419996 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420009 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420051 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420062 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420072 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part1', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part14', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part15', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part16', 'scsi-SQEMU_QEMU_HARDDISK_d50da758-3434-48e7-b2ef-c53bd7d7b8a5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420091 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420107 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420116 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420125 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420134 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420143 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420156 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420179 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420198 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.420208 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part1', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part14', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part15', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part16', 'scsi-SQEMU_QEMU_HARDDISK_14923614-9faf-46a7-928a-44af17f4ba91-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420222 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420243 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420253 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420271 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420280 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420312 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420322 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420331 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.420340 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_40c948a9-46ff-474d-963f-02eb165645ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420354 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 
'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420370 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.420385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0', 'dm-uuid-LVM-Fe5paP4RaHCNTyOtYUd48D6X5xKxbgN5ZF9ViuG9w6ObaWaik0UusXTfQv6Upnj5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43', 'dm-uuid-LVM-lGPIba1XCmCrdedZxItRlQ5wsxJuKeX73qUmJ1hQjhylmCIxoVBMqqptLe6gyQix'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420444 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-08-29 21:04:11.420454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420507 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6', 'dm-uuid-LVM-ZlF2XCDYZD1UtTLH1LhhUrb6phYn0u1WeQqWw9uj3pc9o5aJ38s0WNm1vGaeuKzj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99', 'dm-uuid-LVM-vIAxX0t2ryPCpAFoQJVGLUBsLLDK0CquaWCihnEnaolpdeNnFEztlu7vEUNpbjy2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dw20gd-69Qv-eKty-yH3R-4JPQ-wBw3-2SigSk', 'scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4', 'scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420597 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xyw5Vv-Jx4o-JuFH-yepn-cBG4-Zh8H-ZbIRS8', 'scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf', 'scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346', 'scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420640 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420668 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.420682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WAwVmE-n97f-xQhq-5QXz-u2uN-qAiK-XLuUKK', 'scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2', 'scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04', 'dm-uuid-LVM-UfJRkDX0mNOpRn9nwFOha60VYmAVXjDX2XHEdZANKeX1Quek4W897jXn2caXrs1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420779 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183', 'dm-uuid-LVM-e8VEXzThEhG23c1FWIDl5qgfhvlMa1sxAi5EyN7eYryES4U80WDiFO8vV4ZFfpdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITcX7v-glWJ-T3DR-3wBI-Q3pc-2gFu-ScaBKL', 'scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042', 'scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df', 'scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420857 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420866 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.420879 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ABygGI-gKWl-Ooen-nD1A-WWd5-W18E-XUWZuU', 'scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88', 'scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.420966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JUvZKB-Jhl3-cL3g-mivQ-F4rC-Vby2-gmCMXi', 'scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59', 'scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.421037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe', 'scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.421053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:04:11.421062 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.421071 | orchestrator | 2025-08-29 21:04:11.421080 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 21:04:11.421089 | orchestrator | Friday 29 August 2025 20:53:13 +0000 (0:00:01.002) 0:00:31.615 ********* 2025-08-29 21:04:11.421098 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.421107 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.421116 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.421292 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.421306 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.421313 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.421321 | orchestrator | 2025-08-29 21:04:11.421329 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 21:04:11.421337 | orchestrator | Friday 29 August 2025 20:53:14 +0000 (0:00:01.101) 0:00:32.717 ********* 2025-08-29 21:04:11.421345 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.421353 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.421360 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.421368 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.421376 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.421383 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.421391 | orchestrator | 2025-08-29 21:04:11.421399 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 21:04:11.421407 | orchestrator | Friday 29 August 2025 20:53:15 +0000 (0:00:00.627) 0:00:33.344 ********* 2025-08-29 21:04:11.421415 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.421423 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.421431 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.421439 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.421447 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.421454 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.421462 | orchestrator | 2025-08-29 21:04:11.421470 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2025-08-29 21:04:11.421484 | orchestrator | Friday 29 August 2025 20:53:15 +0000 (0:00:00.766) 0:00:34.111 ********* 2025-08-29 21:04:11.421493 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.421500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.421508 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.421516 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.421524 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.421531 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.421539 | orchestrator | 2025-08-29 21:04:11.421547 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 21:04:11.421555 | orchestrator | Friday 29 August 2025 20:53:16 +0000 (0:00:00.743) 0:00:34.854 ********* 2025-08-29 21:04:11.421563 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.421571 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.421578 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.421586 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.421594 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.421601 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.421609 | orchestrator | 2025-08-29 21:04:11.421617 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 21:04:11.421625 | orchestrator | Friday 29 August 2025 20:53:17 +0000 (0:00:01.335) 0:00:36.190 ********* 2025-08-29 21:04:11.421633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.421640 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.421648 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.421656 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.421664 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.421671 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.421679 | orchestrator | 2025-08-29 21:04:11.421687 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 21:04:11.421695 | orchestrator | Friday 29 August 2025 20:53:18 +0000 (0:00:00.675) 0:00:36.865 ********* 2025-08-29 21:04:11.421703 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.421711 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-08-29 21:04:11.421719 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-08-29 21:04:11.421726 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-08-29 21:04:11.421734 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-08-29 21:04:11.421742 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 21:04:11.421750 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 21:04:11.421758 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 21:04:11.421765 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 21:04:11.421773 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-08-29 21:04:11.421781 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 21:04:11.421789 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 21:04:11.421796 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 21:04:11.421804 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-2) 2025-08-29 21:04:11.421812 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 21:04:11.421819 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-08-29 21:04:11.421827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 21:04:11.421835 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 21:04:11.421842 | orchestrator | 2025-08-29 21:04:11.421850 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 21:04:11.421858 | orchestrator | Friday 29 August 2025 20:53:21 +0000 (0:00:02.970) 0:00:39.836 ********* 2025-08-29 21:04:11.421866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.421879 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.421892 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.421901 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.421910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 21:04:11.421919 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 21:04:11.421929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 21:04:11.421938 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.421947 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 21:04:11.421955 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 21:04:11.421965 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 21:04:11.421974 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.422002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 21:04:11.422012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 21:04:11.422058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 21:04:11.422068 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422076 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 21:04:11.422085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 21:04:11.422094 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 21:04:11.422103 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.422112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 21:04:11.422121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 21:04:11.422129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 21:04:11.422138 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.422147 | orchestrator | 2025-08-29 21:04:11.422157 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 21:04:11.422165 | orchestrator | Friday 29 August 2025 20:53:22 +0000 (0:00:01.161) 0:00:40.997 ********* 2025-08-29 21:04:11.422173 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.422181 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.422189 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.422197 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.422205 | orchestrator | 2025-08-29 
21:04:11.422213 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 21:04:11.422222 | orchestrator | Friday 29 August 2025 20:53:24 +0000 (0:00:01.405) 0:00:42.403 ********* 2025-08-29 21:04:11.422230 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422238 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.422246 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.422254 | orchestrator | 2025-08-29 21:04:11.422262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 21:04:11.422270 | orchestrator | Friday 29 August 2025 20:53:24 +0000 (0:00:00.288) 0:00:42.691 ********* 2025-08-29 21:04:11.422278 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422285 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.422293 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.422301 | orchestrator | 2025-08-29 21:04:11.422308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 21:04:11.422316 | orchestrator | Friday 29 August 2025 20:53:25 +0000 (0:00:00.595) 0:00:43.286 ********* 2025-08-29 21:04:11.422324 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422332 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.422340 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.422348 | orchestrator | 2025-08-29 21:04:11.422356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 21:04:11.422363 | orchestrator | Friday 29 August 2025 20:53:25 +0000 (0:00:00.458) 0:00:43.745 ********* 2025-08-29 21:04:11.422377 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.422385 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.422393 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.422400 | orchestrator | 2025-08-29 21:04:11.422408 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 21:04:11.422416 | orchestrator | Friday 29 August 2025 20:53:26 +0000 (0:00:00.540) 0:00:44.285 ********* 2025-08-29 21:04:11.422424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.422432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.422439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.422447 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422455 | orchestrator | 2025-08-29 21:04:11.422463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 21:04:11.422470 | orchestrator | Friday 29 August 2025 20:53:26 +0000 (0:00:00.496) 0:00:44.782 ********* 2025-08-29 21:04:11.422478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.422486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.422494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.422501 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422509 | orchestrator | 2025-08-29 21:04:11.422517 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 21:04:11.422525 | orchestrator | Friday 29 August 2025 20:53:26 +0000 (0:00:00.413) 0:00:45.196 
********* 2025-08-29 21:04:11.422533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.422540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.422548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.422556 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422564 | orchestrator | 2025-08-29 21:04:11.422575 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 21:04:11.422583 | orchestrator | Friday 29 August 2025 20:53:27 +0000 (0:00:00.320) 0:00:45.516 ********* 2025-08-29 21:04:11.422591 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.422599 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.422607 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.422615 | orchestrator | 2025-08-29 21:04:11.422622 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 21:04:11.422630 | orchestrator | Friday 29 August 2025 20:53:27 +0000 (0:00:00.563) 0:00:46.079 ********* 2025-08-29 21:04:11.422638 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 21:04:11.422646 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 21:04:11.422654 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 21:04:11.422661 | orchestrator | 2025-08-29 21:04:11.422669 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 21:04:11.422677 | orchestrator | Friday 29 August 2025 20:53:29 +0000 (0:00:01.269) 0:00:47.350 ********* 2025-08-29 21:04:11.422695 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.422704 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.422712 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:04:11.422720 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 21:04:11.422727 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 21:04:11.422735 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 21:04:11.422743 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 21:04:11.422751 | orchestrator | 2025-08-29 21:04:11.422759 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 21:04:11.422775 | orchestrator | Friday 29 August 2025 20:53:30 +0000 (0:00:00.909) 0:00:48.259 ********* 2025-08-29 21:04:11.422784 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.422791 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.422816 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:04:11.422824 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 21:04:11.422832 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 21:04:11.422840 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 21:04:11.422848 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 21:04:11.422855 | orchestrator | 2025-08-29 21:04:11.422863 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.422871 | orchestrator | Friday 29 August 2025 20:53:31 +0000 (0:00:01.807) 0:00:50.067 ********* 2025-08-29 21:04:11.422879 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.422888 | orchestrator | 2025-08-29 21:04:11.422896 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.422904 | orchestrator | Friday 29 August 2025 20:53:33 +0000 (0:00:01.193) 0:00:51.261 ********* 2025-08-29 21:04:11.422912 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.422920 | orchestrator | 2025-08-29 21:04:11.422927 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.422935 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:01.091) 0:00:52.352 ********* 2025-08-29 21:04:11.422943 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.422951 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.422958 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.422966 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.422974 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.422998 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.423006 | orchestrator | 2025-08-29 21:04:11.423014 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.423022 | orchestrator | Friday 29 August 2025 20:53:34 +0000 (0:00:00.795) 0:00:53.148 ********* 2025-08-29 21:04:11.423030 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423037 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423045 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423053 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423061 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423068 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423076 | orchestrator | 2025-08-29 21:04:11.423084 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.423091 | orchestrator | Friday 29 August 2025 20:53:36 +0000 (0:00:01.270) 0:00:54.418 ********* 2025-08-29 21:04:11.423099 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423107 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423122 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423130 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423138 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423146 | orchestrator | 2025-08-29 21:04:11.423153 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.423161 | orchestrator | Friday 29 August 2025 20:53:37 +0000 (0:00:01.116) 0:00:55.536 ********* 2025-08-29 21:04:11.423169 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423182 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 21:04:11.423194 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423202 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423210 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423218 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423225 | orchestrator | 2025-08-29 21:04:11.423233 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.423241 | orchestrator | Friday 29 August 2025 20:53:38 +0000 (0:00:01.160) 0:00:56.696 ********* 2025-08-29 21:04:11.423249 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.423257 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.423264 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.423272 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.423280 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.423288 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.423295 | orchestrator | 2025-08-29 21:04:11.423303 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.423311 | orchestrator | Friday 29 August 2025 20:53:39 +0000 (0:00:01.034) 0:00:57.730 ********* 2025-08-29 21:04:11.423323 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423347 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.423355 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.423363 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.423371 | orchestrator | 2025-08-29 21:04:11.423379 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.423387 | orchestrator | Friday 29 August 2025 20:53:40 +0000 (0:00:00.575) 0:00:58.305 ********* 2025-08-29 21:04:11.423395 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423403 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423411 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423419 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.423426 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.423434 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.423442 | orchestrator | 2025-08-29 21:04:11.423450 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.423458 | orchestrator | Friday 29 August 2025 20:53:40 +0000 (0:00:00.669) 0:00:58.975 ********* 2025-08-29 21:04:11.423465 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.423473 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.423481 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.423489 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423497 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423505 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423513 | orchestrator | 2025-08-29 21:04:11.423520 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.423528 | orchestrator | Friday 29 August 2025 20:53:41 +0000 (0:00:01.173) 0:01:00.148 ********* 2025-08-29 21:04:11.423536 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.423544 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.423552 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 21:04:11.423559 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423567 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423575 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423583 | orchestrator | 2025-08-29 21:04:11.423591 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.423599 | orchestrator | Friday 29 August 2025 20:53:43 +0000 (0:00:01.360) 0:01:01.509 ********* 2025-08-29 21:04:11.423607 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423615 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423622 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.423638 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.423646 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.423659 | orchestrator | 2025-08-29 21:04:11.423667 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.423675 | orchestrator | Friday 29 August 2025 20:53:43 +0000 (0:00:00.549) 0:01:02.059 ********* 2025-08-29 21:04:11.423683 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.423691 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.423699 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.423707 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.423715 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.423723 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.423731 | orchestrator | 2025-08-29 21:04:11.423738 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.423746 | orchestrator | Friday 29 August 2025 20:53:44 +0000 (0:00:01.110) 0:01:03.169 ********* 2025-08-29 21:04:11.423754 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423762 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423770 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423778 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423786 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423794 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423802 | orchestrator | 2025-08-29 21:04:11.423809 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.423817 | orchestrator | Friday 29 August 2025 20:53:45 +0000 (0:00:00.855) 0:01:04.025 ********* 2025-08-29 21:04:11.423825 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423833 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423841 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423849 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.423856 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423864 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423872 | orchestrator | 2025-08-29 21:04:11.423880 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.423888 | orchestrator | Friday 29 August 2025 20:53:47 +0000 (0:00:01.242) 0:01:05.268 ********* 2025-08-29 21:04:11.423896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423904 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423912 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.423919 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 21:04:11.423927 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.423935 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.423943 | orchestrator | 2025-08-29 21:04:11.423951 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.423958 | orchestrator | Friday 29 August 2025 20:53:47 +0000 (0:00:00.633) 0:01:05.901 ********* 2025-08-29 21:04:11.423966 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.423974 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.423999 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424007 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424015 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424030 | orchestrator | 2025-08-29 21:04:11.424038 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.424046 | orchestrator | Friday 29 August 2025 20:53:48 +0000 (0:00:00.796) 0:01:06.697 ********* 2025-08-29 21:04:11.424054 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.424062 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.424070 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424077 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424085 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424093 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424101 | orchestrator | 2025-08-29 21:04:11.424108 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.424120 | orchestrator | Friday 29 August 2025 20:53:49 +0000 (0:00:00.704) 0:01:07.401 ********* 2025-08-29 21:04:11.424133 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.424141 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.424149 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.424157 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424165 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424172 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424180 | orchestrator | 2025-08-29 21:04:11.424188 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.424196 | orchestrator | Friday 29 August 2025 20:53:50 +0000 (0:00:00.852) 0:01:08.254 ********* 2025-08-29 21:04:11.424204 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.424211 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.424219 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.424227 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.424235 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.424242 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.424250 | orchestrator | 2025-08-29 21:04:11.424258 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.424266 | orchestrator | Friday 29 August 2025 20:53:51 +0000 (0:00:00.963) 0:01:09.217 ********* 2025-08-29 21:04:11.424274 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.424281 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.424289 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.424297 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.424304 | orchestrator | ok: [testbed-node-4] 
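(Aside on the ceph-handler pattern seen above: each "Check for a ... container" task only probes whether the matching Ceph container is already running on a host, and the corresponding "Set_fact handler_..._status" task turns that probe into a boolean fact that later gates the restart handlers. The following is a minimal sketch of that pattern, assuming a container_binary variable (podman or docker) and a registered variable name of our own choosing; it is illustrative only, not the actual ceph-ansible role source.)

- name: Check for a mon container        # probe only; never fails and never reports "changed"
  ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false
  when: inventory_hostname in groups.get('mons', [])

- name: Set_fact handler_mon_status      # true only if the probe returned at least one container id
  ansible.builtin.set_fact:
    handler_mon_status: "{{ (ceph_mon_container_stat.stdout_lines | default([])) | length > 0 }}"
  when: inventory_hostname in groups.get('mons', [])

(This group-scoped "when" is why the check tasks above report ok on some hosts and skipping on the others: the mon/mgr checks run only on testbed-node-0..2, the osd/mds/rgw checks only on testbed-node-3..5, while crash and exporter checks run everywhere.)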
2025-08-29 21:04:11.424312 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.424320 | orchestrator | 2025-08-29 21:04:11.424328 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-08-29 21:04:11.424336 | orchestrator | Friday 29 August 2025 20:53:52 +0000 (0:00:01.510) 0:01:10.728 ********* 2025-08-29 21:04:11.424344 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.424352 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.424359 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.424367 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.424375 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.424383 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.424390 | orchestrator | 2025-08-29 21:04:11.424398 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-08-29 21:04:11.424406 | orchestrator | Friday 29 August 2025 20:53:54 +0000 (0:00:01.632) 0:01:12.361 ********* 2025-08-29 21:04:11.424414 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.424422 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.424429 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.424437 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.424445 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.424452 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.424460 | orchestrator | 2025-08-29 21:04:11.424468 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-08-29 21:04:11.424476 | orchestrator | Friday 29 August 2025 20:53:56 +0000 (0:00:02.020) 0:01:14.381 ********* 2025-08-29 21:04:11.424484 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.424492 | orchestrator | 2025-08-29 21:04:11.424499 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-08-29 21:04:11.424507 | orchestrator | Friday 29 August 2025 20:53:57 +0000 (0:00:00.955) 0:01:15.337 ********* 2025-08-29 21:04:11.424515 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.424523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.424531 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424538 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424554 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424561 | orchestrator | 2025-08-29 21:04:11.424569 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-08-29 21:04:11.424582 | orchestrator | Friday 29 August 2025 20:53:57 +0000 (0:00:00.622) 0:01:15.959 ********* 2025-08-29 21:04:11.424590 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.424597 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.424605 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424613 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424621 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424628 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424636 | orchestrator | 2025-08-29 21:04:11.424644 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] 
************************** 2025-08-29 21:04:11.424652 | orchestrator | Friday 29 August 2025 20:53:58 +0000 (0:00:00.515) 0:01:16.475 ********* 2025-08-29 21:04:11.424660 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424668 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424675 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424683 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424695 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424703 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424710 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424718 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424726 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424734 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 21:04:11.424741 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424749 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 21:04:11.424757 | orchestrator | 2025-08-29 21:04:11.424768 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-08-29 21:04:11.424777 | orchestrator | Friday 29 August 2025 20:53:59 +0000 (0:00:01.401) 0:01:17.876 ********* 2025-08-29 21:04:11.424784 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.424792 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.424800 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.424808 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.424816 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.424824 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.424831 | orchestrator | 2025-08-29 21:04:11.424839 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-08-29 21:04:11.424847 | orchestrator | Friday 29 August 2025 20:54:00 +0000 (0:00:00.823) 0:01:18.700 ********* 2025-08-29 21:04:11.424855 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.424863 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.424870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.424886 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424894 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424901 | orchestrator | 2025-08-29 21:04:11.424909 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-08-29 21:04:11.424917 | orchestrator | Friday 29 August 2025 20:54:01 +0000 (0:00:00.610) 0:01:19.311 ********* 2025-08-29 21:04:11.424925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.424933 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.424941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.424948 | orchestrator 
| skipping: [testbed-node-3] 2025-08-29 21:04:11.424956 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.424972 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.424993 | orchestrator | 2025-08-29 21:04:11.425001 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-08-29 21:04:11.425009 | orchestrator | Friday 29 August 2025 20:54:01 +0000 (0:00:00.532) 0:01:19.843 ********* 2025-08-29 21:04:11.425017 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425025 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425032 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425040 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425048 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425056 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425063 | orchestrator | 2025-08-29 21:04:11.425071 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-08-29 21:04:11.425079 | orchestrator | Friday 29 August 2025 20:54:02 +0000 (0:00:00.595) 0:01:20.439 ********* 2025-08-29 21:04:11.425087 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.425095 | orchestrator | 2025-08-29 21:04:11.425103 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-08-29 21:04:11.425111 | orchestrator | Friday 29 August 2025 20:54:03 +0000 (0:00:00.980) 0:01:21.420 ********* 2025-08-29 21:04:11.425118 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.425126 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.425134 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.425142 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.425149 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.425157 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.425165 | orchestrator | 2025-08-29 21:04:11.425173 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-08-29 21:04:11.425180 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:01:34.045) 0:02:55.465 ********* 2025-08-29 21:04:11.425188 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425196 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425204 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 21:04:11.425212 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425220 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425227 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425235 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 21:04:11.425243 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425251 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425259 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425267 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 
21:04:11.425274 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425282 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425294 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425302 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 21:04:11.425310 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425317 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425325 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425333 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 21:04:11.425341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425354 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 21:04:11.425362 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 21:04:11.425370 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 21:04:11.425382 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425390 | orchestrator | 2025-08-29 21:04:11.425398 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 21:04:11.425406 | orchestrator | Friday 29 August 2025 20:55:37 +0000 (0:00:00.661) 0:02:56.126 ********* 2025-08-29 21:04:11.425413 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425429 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425437 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425444 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425452 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425460 | orchestrator | 2025-08-29 21:04:11.425468 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 21:04:11.425476 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.509) 0:02:56.636 ********* 2025-08-29 21:04:11.425484 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425492 | orchestrator | 2025-08-29 21:04:11.425499 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 21:04:11.425507 | orchestrator | Friday 29 August 2025 20:55:38 +0000 (0:00:00.165) 0:02:56.802 ********* 2025-08-29 21:04:11.425515 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425523 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425530 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425538 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425554 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425562 | orchestrator | 2025-08-29 21:04:11.425569 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-08-29 21:04:11.425577 | orchestrator | Friday 29 August 2025 20:55:39 +0000 (0:00:00.803) 0:02:57.605 ********* 2025-08-29 21:04:11.425585 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425593 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425601 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425609 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425616 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425624 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425632 | orchestrator | 2025-08-29 21:04:11.425640 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-08-29 21:04:11.425647 | orchestrator | Friday 29 August 2025 20:55:39 +0000 (0:00:00.546) 0:02:58.152 ********* 2025-08-29 21:04:11.425655 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425671 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425679 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425686 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425694 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425702 | orchestrator | 2025-08-29 21:04:11.425710 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-08-29 21:04:11.425718 | orchestrator | Friday 29 August 2025 20:55:40 +0000 (0:00:00.674) 0:02:58.827 ********* 2025-08-29 21:04:11.425726 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.425734 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.425741 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.425749 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.425757 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.425765 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.425772 | orchestrator | 2025-08-29 21:04:11.425780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-08-29 21:04:11.425793 | orchestrator | Friday 29 August 2025 20:55:42 +0000 (0:00:02.364) 0:03:01.191 ********* 2025-08-29 21:04:11.425801 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.425809 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.425817 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.425824 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.425832 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.425840 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.425847 | orchestrator | 2025-08-29 21:04:11.425855 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-08-29 21:04:11.425863 | orchestrator | Friday 29 August 2025 20:55:43 +0000 (0:00:00.780) 0:03:01.971 ********* 2025-08-29 21:04:11.425871 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.425880 | orchestrator | 2025-08-29 21:04:11.425888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-08-29 21:04:11.425896 | orchestrator | Friday 29 August 2025 20:55:45 +0000 (0:00:01.396) 0:03:03.368 ********* 2025-08-29 21:04:11.425904 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.425911 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.425919 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.425927 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.425935 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.425942 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.425950 
| orchestrator | 2025-08-29 21:04:11.425958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-08-29 21:04:11.425966 | orchestrator | Friday 29 August 2025 20:55:45 +0000 (0:00:00.636) 0:03:04.005 ********* 2025-08-29 21:04:11.426046 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426059 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426067 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426075 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426083 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426091 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426099 | orchestrator | 2025-08-29 21:04:11.426107 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-08-29 21:04:11.426115 | orchestrator | Friday 29 August 2025 20:55:46 +0000 (0:00:00.772) 0:03:04.778 ********* 2025-08-29 21:04:11.426123 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426139 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426154 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426162 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426170 | orchestrator | 2025-08-29 21:04:11.426178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-08-29 21:04:11.426196 | orchestrator | Friday 29 August 2025 20:55:47 +0000 (0:00:00.951) 0:03:05.730 ********* 2025-08-29 21:04:11.426204 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426212 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426220 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426228 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426236 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426244 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426252 | orchestrator | 2025-08-29 21:04:11.426260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-08-29 21:04:11.426267 | orchestrator | Friday 29 August 2025 20:55:48 +0000 (0:00:00.776) 0:03:06.506 ********* 2025-08-29 21:04:11.426275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426283 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426297 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426304 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426316 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426323 | orchestrator | 2025-08-29 21:04:11.426329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-08-29 21:04:11.426336 | orchestrator | Friday 29 August 2025 20:55:48 +0000 (0:00:00.489) 0:03:06.996 ********* 2025-08-29 21:04:11.426343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426349 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426356 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426362 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426369 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426376 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
21:04:11.426382 | orchestrator | 2025-08-29 21:04:11.426389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-08-29 21:04:11.426396 | orchestrator | Friday 29 August 2025 20:55:49 +0000 (0:00:00.827) 0:03:07.823 ********* 2025-08-29 21:04:11.426402 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426415 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426422 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426429 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426435 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426442 | orchestrator | 2025-08-29 21:04:11.426449 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-08-29 21:04:11.426455 | orchestrator | Friday 29 August 2025 20:55:50 +0000 (0:00:00.616) 0:03:08.440 ********* 2025-08-29 21:04:11.426462 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.426469 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.426475 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.426482 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.426488 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.426495 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.426501 | orchestrator | 2025-08-29 21:04:11.426508 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-08-29 21:04:11.426515 | orchestrator | Friday 29 August 2025 20:55:51 +0000 (0:00:00.848) 0:03:09.288 ********* 2025-08-29 21:04:11.426521 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.426528 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.426535 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.426541 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.426548 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.426554 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.426561 | orchestrator | 2025-08-29 21:04:11.426568 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-08-29 21:04:11.426574 | orchestrator | Friday 29 August 2025 20:55:52 +0000 (0:00:01.356) 0:03:10.645 ********* 2025-08-29 21:04:11.426581 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.426588 | orchestrator | 2025-08-29 21:04:11.426595 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-08-29 21:04:11.426601 | orchestrator | Friday 29 August 2025 20:55:53 +0000 (0:00:01.223) 0:03:11.869 ********* 2025-08-29 21:04:11.426608 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-08-29 21:04:11.426615 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-08-29 21:04:11.426621 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-08-29 21:04:11.426628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426641 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-08-29 21:04:11.426648 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-08-29 21:04:11.426654 | 
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-08-29 21:04:11.426661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426672 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426679 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426691 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426698 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426705 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-08-29 21:04:11.426712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426725 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426738 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426752 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426762 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-08-29 21:04:11.426769 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426776 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426782 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426796 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426802 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-08-29 21:04:11.426809 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426815 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426829 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426835 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-08-29 21:04:11.426842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426849 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426855 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426862 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426869 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-08-29 21:04:11.426875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426895 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426902 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426908 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/crash) 2025-08-29 21:04:11.426915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.426922 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.426928 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426935 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426942 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426948 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.426955 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.426966 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-08-29 21:04:11.426972 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.426992 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.426999 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.427006 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427013 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427019 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 21:04:11.427026 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.427033 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.427039 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.427046 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 21:04:11.427066 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427072 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427079 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427086 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427099 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 21:04:11.427108 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427129 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427135 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427142 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 21:04:11.427148 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427155 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427162 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-08-29 21:04:11.427172 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-08-29 21:04:11.427179 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427186 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 21:04:11.427193 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427200 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-08-29 21:04:11.427206 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427213 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-08-29 21:04:11.427220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427226 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-08-29 21:04:11.427233 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 21:04:11.427240 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-08-29 21:04:11.427246 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-08-29 21:04:11.427258 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-08-29 21:04:11.427264 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-08-29 21:04:11.427271 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-08-29 21:04:11.427278 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-08-29 21:04:11.427284 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-08-29 21:04:11.427291 | orchestrator | 2025-08-29 21:04:11.427298 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-08-29 21:04:11.427305 | orchestrator | Friday 29 August 2025 20:56:00 +0000 (0:00:07.190) 0:03:19.060 ********* 2025-08-29 21:04:11.427312 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427318 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427325 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427332 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.427339 | orchestrator | 2025-08-29 21:04:11.427346 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-08-29 21:04:11.427352 | orchestrator | Friday 29 August 2025 20:56:01 +0000 (0:00:00.821) 0:03:19.882 ********* 2025-08-29 21:04:11.427359 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427366 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427373 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427380 | orchestrator | 2025-08-29 21:04:11.427387 | orchestrator | TASK [ceph-config : 
Generate environment file] ********************************* 2025-08-29 21:04:11.427393 | orchestrator | Friday 29 August 2025 20:56:02 +0000 (0:00:00.847) 0:03:20.729 ********* 2025-08-29 21:04:11.427400 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427407 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427414 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.427420 | orchestrator | 2025-08-29 21:04:11.427427 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-08-29 21:04:11.427434 | orchestrator | Friday 29 August 2025 20:56:03 +0000 (0:00:01.252) 0:03:21.982 ********* 2025-08-29 21:04:11.427440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427447 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427454 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427461 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.427467 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.427474 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.427481 | orchestrator | 2025-08-29 21:04:11.427487 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-08-29 21:04:11.427494 | orchestrator | Friday 29 August 2025 20:56:04 +0000 (0:00:00.762) 0:03:22.744 ********* 2025-08-29 21:04:11.427501 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427508 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427514 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427521 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.427528 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.427537 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.427544 | orchestrator | 2025-08-29 21:04:11.427551 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-08-29 21:04:11.427558 | orchestrator | Friday 29 August 2025 20:56:05 +0000 (0:00:00.686) 0:03:23.430 ********* 2025-08-29 21:04:11.427568 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427575 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427582 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427588 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427595 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427602 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427608 | orchestrator | 2025-08-29 21:04:11.427615 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-08-29 21:04:11.427622 | orchestrator | Friday 29 August 2025 20:56:06 +0000 (0:00:01.188) 0:03:24.619 ********* 2025-08-29 21:04:11.427629 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427635 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427645 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427658 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427665 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427672 | orchestrator | 
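(Aside on the ceph-config rgw tasks above: they loop over rgw_instances entries of the form {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}, creating one instance directory and one systemd environment file per radosgw instance. The sketch below shows such a per-instance loop under stated assumptions: the containerized-Ceph ownership of uid/gid 167 and a simple INST_NAME entry in the environment file are illustrative guesses, not the role's actual templates.)

- name: Create rados gateway instance directories
  ansible.builtin.file:
    path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: directory
    owner: "167"          # ceph uid/gid inside the container image (assumption)
    group: "167"
    mode: "0755"
  loop: "{{ rgw_instances }}"

- name: Generate environment file
  ansible.builtin.copy:
    dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
    content: |
      INST_NAME={{ item.instance_name }}   # consumed by the rgw container unit (assumed variable name)
    owner: "167"
    group: "167"
    mode: "0644"
  loop: "{{ rgw_instances }}"

(With rgw_instances holding a single rgw0 entry per node, a loop like this matches the three changed items reported above for testbed-node-3/4/5, binding beast to port 8081 on 192.168.16.13-15.)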
2025-08-29 21:04:11.427678 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-08-29 21:04:11.427685 | orchestrator | Friday 29 August 2025 20:56:06 +0000 (0:00:00.575) 0:03:25.195 ********* 2025-08-29 21:04:11.427692 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427699 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427712 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427718 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427725 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427731 | orchestrator | 2025-08-29 21:04:11.427738 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-08-29 21:04:11.427745 | orchestrator | Friday 29 August 2025 20:56:07 +0000 (0:00:00.774) 0:03:25.969 ********* 2025-08-29 21:04:11.427752 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427758 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427765 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427771 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427778 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427784 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427791 | orchestrator | 2025-08-29 21:04:11.427798 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-08-29 21:04:11.427805 | orchestrator | Friday 29 August 2025 20:56:08 +0000 (0:00:00.598) 0:03:26.568 ********* 2025-08-29 21:04:11.427811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427818 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427825 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427831 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427838 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427844 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427851 | orchestrator | 2025-08-29 21:04:11.427858 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-08-29 21:04:11.427865 | orchestrator | Friday 29 August 2025 20:56:09 +0000 (0:00:00.920) 0:03:27.488 ********* 2025-08-29 21:04:11.427871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427891 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.427898 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.427904 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.427911 | orchestrator | 2025-08-29 21:04:11.427918 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-08-29 21:04:11.427924 | orchestrator | Friday 29 August 2025 20:56:10 +0000 (0:00:01.041) 0:03:28.530 ********* 2025-08-29 21:04:11.427931 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.427938 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.427948 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.427955 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.427962 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.427969 
| orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.428018 | orchestrator | 2025-08-29 21:04:11.428025 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-08-29 21:04:11.428032 | orchestrator | Friday 29 August 2025 20:56:14 +0000 (0:00:03.904) 0:03:32.434 ********* 2025-08-29 21:04:11.428039 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428045 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428052 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428059 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.428065 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.428072 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.428078 | orchestrator | 2025-08-29 21:04:11.428085 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-08-29 21:04:11.428092 | orchestrator | Friday 29 August 2025 20:56:14 +0000 (0:00:00.655) 0:03:33.089 ********* 2025-08-29 21:04:11.428098 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428105 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428112 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428118 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.428125 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.428132 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.428138 | orchestrator | 2025-08-29 21:04:11.428145 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-08-29 21:04:11.428151 | orchestrator | Friday 29 August 2025 20:56:15 +0000 (0:00:01.089) 0:03:34.179 ********* 2025-08-29 21:04:11.428158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428165 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428171 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428178 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428184 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428191 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428197 | orchestrator | 2025-08-29 21:04:11.428204 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-08-29 21:04:11.428214 | orchestrator | Friday 29 August 2025 20:56:16 +0000 (0:00:00.781) 0:03:34.960 ********* 2025-08-29 21:04:11.428221 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428227 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428234 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428241 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.428247 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.428254 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.428261 | orchestrator | 2025-08-29 21:04:11.428268 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-08-29 21:04:11.428278 | orchestrator | Friday 29 August 2025 20:56:17 +0000 (0:00:01.209) 0:03:36.170 ********* 2025-08-29 21:04:11.428285 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428291 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 21:04:11.428298 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428305 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-08-29 21:04:11.428314 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-08-29 21:04:11.428326 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-08-29 21:04:11.428333 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-08-29 21:04:11.428340 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428346 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428353 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-08-29 21:04:11.428360 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-08-29 21:04:11.428367 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428374 | orchestrator | 2025-08-29 21:04:11.428380 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-08-29 21:04:11.428387 | orchestrator | Friday 29 August 2025 20:56:18 +0000 (0:00:00.907) 0:03:37.077 ********* 2025-08-29 21:04:11.428394 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428400 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428407 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428413 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428420 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428427 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428433 | orchestrator | 2025-08-29 21:04:11.428440 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-08-29 21:04:11.428447 | orchestrator | Friday 29 August 2025 20:56:19 +0000 (0:00:00.911) 0:03:37.989 ********* 2025-08-29 21:04:11.428453 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428460 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
21:04:11.428467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428473 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428480 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428493 | orchestrator | 2025-08-29 21:04:11.428500 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 21:04:11.428507 | orchestrator | Friday 29 August 2025 20:56:20 +0000 (0:00:00.543) 0:03:38.533 ********* 2025-08-29 21:04:11.428513 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428520 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428540 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428546 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428553 | orchestrator | 2025-08-29 21:04:11.428563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 21:04:11.428570 | orchestrator | Friday 29 August 2025 20:56:21 +0000 (0:00:00.964) 0:03:39.497 ********* 2025-08-29 21:04:11.428576 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428587 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428594 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428601 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428607 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428614 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428620 | orchestrator | 2025-08-29 21:04:11.428627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 21:04:11.428634 | orchestrator | Friday 29 August 2025 20:56:22 +0000 (0:00:00.718) 0:03:40.215 ********* 2025-08-29 21:04:11.428640 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428647 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428654 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428664 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.428671 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.428677 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.428684 | orchestrator | 2025-08-29 21:04:11.428691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 21:04:11.428697 | orchestrator | Friday 29 August 2025 20:56:22 +0000 (0:00:00.646) 0:03:40.862 ********* 2025-08-29 21:04:11.428704 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428711 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428717 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428724 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.428731 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.428737 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.428744 | orchestrator | 2025-08-29 21:04:11.428751 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 21:04:11.428758 | orchestrator | Friday 29 August 2025 20:56:23 +0000 (0:00:00.835) 0:03:41.701 ********* 2025-08-29 21:04:11.428764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 
21:04:11.428771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 21:04:11.428778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 21:04:11.428784 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428791 | orchestrator | 2025-08-29 21:04:11.428798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 21:04:11.428805 | orchestrator | Friday 29 August 2025 20:56:24 +0000 (0:00:00.572) 0:03:42.274 ********* 2025-08-29 21:04:11.428811 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 21:04:11.428818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 21:04:11.428825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 21:04:11.428831 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428838 | orchestrator | 2025-08-29 21:04:11.428845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 21:04:11.428851 | orchestrator | Friday 29 August 2025 20:56:24 +0000 (0:00:00.512) 0:03:42.787 ********* 2025-08-29 21:04:11.428858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 21:04:11.428865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 21:04:11.428871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 21:04:11.428878 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428885 | orchestrator | 2025-08-29 21:04:11.428892 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 21:04:11.428898 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:00.537) 0:03:43.325 ********* 2025-08-29 21:04:11.428905 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.428912 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.428918 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.428925 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.428932 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.428938 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.428945 | orchestrator | 2025-08-29 21:04:11.428956 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 21:04:11.428962 | orchestrator | Friday 29 August 2025 20:56:25 +0000 (0:00:00.576) 0:03:43.901 ********* 2025-08-29 21:04:11.428969 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-08-29 21:04:11.429008 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.429017 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-08-29 21:04:11.429024 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.429030 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-08-29 21:04:11.429037 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.429044 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 21:04:11.429050 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 21:04:11.429057 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 21:04:11.429064 | orchestrator | 2025-08-29 21:04:11.429070 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-08-29 21:04:11.429077 | orchestrator | Friday 29 August 2025 20:56:27 +0000 (0:00:01.772) 0:03:45.674 ********* 2025-08-29 21:04:11.429084 | orchestrator 
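
The rgw0 instances rendered above (192.168.16.13-15, port 8081) become per-instance client.rgw.* settings; the 'Set config to cluster' task that would push them was skipped in this run. Applied manually, the same values would map to 'ceph config set' calls of roughly this shape (shown for testbed-node-3 only):

    # Hypothetical manual equivalent of the skipped "Set config to cluster" items.
    ceph config set client.rgw.default.testbed-node-3.rgw0 rgw_frontends "beast endpoint=192.168.16.13:8081"
    ceph config set client.rgw.default.testbed-node-3.rgw0 log_file /var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log
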
| changed: [testbed-node-0] 2025-08-29 21:04:11.429090 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.429097 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.429104 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.429110 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.429117 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.429124 | orchestrator | 2025-08-29 21:04:11.429130 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.429137 | orchestrator | Friday 29 August 2025 20:56:29 +0000 (0:00:02.341) 0:03:48.015 ********* 2025-08-29 21:04:11.429143 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.429150 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.429157 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.429163 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.429170 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.429176 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.429182 | orchestrator | 2025-08-29 21:04:11.429188 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 21:04:11.429197 | orchestrator | Friday 29 August 2025 20:56:30 +0000 (0:00:00.950) 0:03:48.966 ********* 2025-08-29 21:04:11.429204 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429210 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.429216 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.429222 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.429229 | orchestrator | 2025-08-29 21:04:11.429235 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 21:04:11.429241 | orchestrator | Friday 29 August 2025 20:56:31 +0000 (0:00:00.840) 0:03:49.807 ********* 2025-08-29 21:04:11.429247 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.429253 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.429259 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.429265 | orchestrator | 2025-08-29 21:04:11.429272 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 21:04:11.429281 | orchestrator | Friday 29 August 2025 20:56:31 +0000 (0:00:00.279) 0:03:50.086 ********* 2025-08-29 21:04:11.429288 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.429294 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.429300 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.429306 | orchestrator | 2025-08-29 21:04:11.429312 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 21:04:11.429318 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:01.165) 0:03:51.251 ********* 2025-08-29 21:04:11.429325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.429331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.429337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.429347 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.429353 | orchestrator | 2025-08-29 21:04:11.429359 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 
21:04:11.429365 | orchestrator | Friday 29 August 2025 20:56:33 +0000 (0:00:00.687) 0:03:51.939 ********* 2025-08-29 21:04:11.429371 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.429377 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.429384 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.429390 | orchestrator | 2025-08-29 21:04:11.429396 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 21:04:11.429402 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:00.459) 0:03:52.398 ********* 2025-08-29 21:04:11.429408 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.429414 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.429421 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.429427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.429433 | orchestrator | 2025-08-29 21:04:11.429439 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 21:04:11.429445 | orchestrator | Friday 29 August 2025 20:56:34 +0000 (0:00:00.731) 0:03:53.129 ********* 2025-08-29 21:04:11.429452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.429458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.429464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.429470 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429476 | orchestrator | 2025-08-29 21:04:11.429482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 21:04:11.429489 | orchestrator | Friday 29 August 2025 20:56:35 +0000 (0:00:00.482) 0:03:53.612 ********* 2025-08-29 21:04:11.429495 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429501 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.429507 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.429513 | orchestrator | 2025-08-29 21:04:11.429519 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 21:04:11.429525 | orchestrator | Friday 29 August 2025 20:56:35 +0000 (0:00:00.470) 0:03:54.082 ********* 2025-08-29 21:04:11.429532 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429538 | orchestrator | 2025-08-29 21:04:11.429544 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 21:04:11.429550 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:00.192) 0:03:54.274 ********* 2025-08-29 21:04:11.429556 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.429562 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429568 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.429575 | orchestrator | 2025-08-29 21:04:11.429581 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 21:04:11.429587 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:00.301) 0:03:54.576 ********* 2025-08-29 21:04:11.429593 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429599 | orchestrator | 2025-08-29 21:04:11.429606 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 21:04:11.429612 | orchestrator | Friday 29 August 2025 
20:56:36 +0000 (0:00:00.154) 0:03:54.730 ********* 2025-08-29 21:04:11.429618 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429624 | orchestrator | 2025-08-29 21:04:11.429630 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 21:04:11.429636 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:00.244) 0:03:54.974 ********* 2025-08-29 21:04:11.429643 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429649 | orchestrator | 2025-08-29 21:04:11.429655 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 21:04:11.429665 | orchestrator | Friday 29 August 2025 20:56:36 +0000 (0:00:00.079) 0:03:55.054 ********* 2025-08-29 21:04:11.429671 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429677 | orchestrator | 2025-08-29 21:04:11.429683 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 21:04:11.429690 | orchestrator | Friday 29 August 2025 20:56:37 +0000 (0:00:00.150) 0:03:55.205 ********* 2025-08-29 21:04:11.429696 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429702 | orchestrator | 2025-08-29 21:04:11.429713 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 21:04:11.429720 | orchestrator | Friday 29 August 2025 20:56:37 +0000 (0:00:00.185) 0:03:55.390 ********* 2025-08-29 21:04:11.429726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.429732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.429738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.429744 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429750 | orchestrator | 2025-08-29 21:04:11.429757 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 21:04:11.429763 | orchestrator | Friday 29 August 2025 20:56:37 +0000 (0:00:00.515) 0:03:55.906 ********* 2025-08-29 21:04:11.429769 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429775 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.429781 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.429788 | orchestrator | 2025-08-29 21:04:11.429797 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 21:04:11.429804 | orchestrator | Friday 29 August 2025 20:56:38 +0000 (0:00:00.418) 0:03:56.324 ********* 2025-08-29 21:04:11.429810 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429816 | orchestrator | 2025-08-29 21:04:11.429822 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-08-29 21:04:11.429829 | orchestrator | Friday 29 August 2025 20:56:38 +0000 (0:00:00.199) 0:03:56.523 ********* 2025-08-29 21:04:11.429835 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.429841 | orchestrator | 2025-08-29 21:04:11.429847 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 21:04:11.429853 | orchestrator | Friday 29 August 2025 20:56:38 +0000 (0:00:00.184) 0:03:56.708 ********* 2025-08-29 21:04:11.429859 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.429866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.429872 | orchestrator | skipping: [testbed-node-2] 
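
All of the osds-handler steps above are skipped because no OSD restart was triggered, but they describe the usual safe-restart sequence: pause rebalancing, restart the daemons, then restore the previous state. A rough manual equivalent is sketched below; the pool name and OSD id are placeholders.

    # Quiesce data movement before restarting OSDs, then undo it afterwards.
    ceph balancer off
    ceph osd pool set rbd pg_autoscale_mode off   # repeat per pool from 'ceph osd pool ls'
    systemctl restart ceph-osd@0                  # repeat per OSD id on the host
    ceph osd pool set rbd pg_autoscale_mode on
    ceph balancer on
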
2025-08-29 21:04:11.429878 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.429884 | orchestrator | 2025-08-29 21:04:11.429890 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 21:04:11.429897 | orchestrator | Friday 29 August 2025 20:56:39 +0000 (0:00:00.834) 0:03:57.542 ********* 2025-08-29 21:04:11.429903 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.429909 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.429915 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.429921 | orchestrator | 2025-08-29 21:04:11.429927 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 21:04:11.429934 | orchestrator | Friday 29 August 2025 20:56:39 +0000 (0:00:00.301) 0:03:57.843 ********* 2025-08-29 21:04:11.429940 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.429946 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.429952 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.429958 | orchestrator | 2025-08-29 21:04:11.429964 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 21:04:11.429971 | orchestrator | Friday 29 August 2025 20:56:40 +0000 (0:00:01.121) 0:03:58.965 ********* 2025-08-29 21:04:11.429988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.429995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.430008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.430029 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.430037 | orchestrator | 2025-08-29 21:04:11.430043 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 21:04:11.430049 | orchestrator | Friday 29 August 2025 20:56:41 +0000 (0:00:00.679) 0:03:59.644 ********* 2025-08-29 21:04:11.430056 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.430062 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.430068 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.430075 | orchestrator | 2025-08-29 21:04:11.430081 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 21:04:11.430087 | orchestrator | Friday 29 August 2025 20:56:41 +0000 (0:00:00.255) 0:03:59.900 ********* 2025-08-29 21:04:11.430093 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430100 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430106 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.430119 | orchestrator | 2025-08-29 21:04:11.430125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 21:04:11.430131 | orchestrator | Friday 29 August 2025 20:56:42 +0000 (0:00:00.804) 0:04:00.704 ********* 2025-08-29 21:04:11.430137 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.430143 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.430150 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.430156 | orchestrator | 2025-08-29 21:04:11.430162 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-08-29 21:04:11.430169 | orchestrator | Friday 29 August 2025 20:56:42 +0000 (0:00:00.263) 0:04:00.967 ********* 2025-08-29 21:04:11.430175 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.430181 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.430187 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.430193 | orchestrator | 2025-08-29 21:04:11.430200 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 21:04:11.430206 | orchestrator | Friday 29 August 2025 20:56:44 +0000 (0:00:01.502) 0:04:02.470 ********* 2025-08-29 21:04:11.430212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.430218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.430224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.430231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.430237 | orchestrator | 2025-08-29 21:04:11.430243 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 21:04:11.430249 | orchestrator | Friday 29 August 2025 20:56:44 +0000 (0:00:00.556) 0:04:03.027 ********* 2025-08-29 21:04:11.430259 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.430265 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.430271 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.430277 | orchestrator | 2025-08-29 21:04:11.430283 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-08-29 21:04:11.430290 | orchestrator | Friday 29 August 2025 20:56:45 +0000 (0:00:00.286) 0:04:03.313 ********* 2025-08-29 21:04:11.430296 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430302 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430308 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430314 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.430321 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.430327 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.430333 | orchestrator | 2025-08-29 21:04:11.430339 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 21:04:11.430345 | orchestrator | Friday 29 August 2025 20:56:45 +0000 (0:00:00.677) 0:04:03.990 ********* 2025-08-29 21:04:11.430359 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.430366 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.430376 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.430382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.430389 | orchestrator | 2025-08-29 21:04:11.430395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 21:04:11.430401 | orchestrator | Friday 29 August 2025 20:56:46 +0000 (0:00:00.705) 0:04:04.696 ********* 2025-08-29 21:04:11.430407 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430413 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430420 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430426 | orchestrator | 2025-08-29 21:04:11.430432 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 21:04:11.430438 | orchestrator | Friday 29 
August 2025 20:56:46 +0000 (0:00:00.404) 0:04:05.100 ********* 2025-08-29 21:04:11.430445 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.430451 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.430457 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.430463 | orchestrator | 2025-08-29 21:04:11.430469 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 21:04:11.430476 | orchestrator | Friday 29 August 2025 20:56:48 +0000 (0:00:01.208) 0:04:06.309 ********* 2025-08-29 21:04:11.430482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.430488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.430494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.430501 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430507 | orchestrator | 2025-08-29 21:04:11.430513 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 21:04:11.430519 | orchestrator | Friday 29 August 2025 20:56:48 +0000 (0:00:00.529) 0:04:06.838 ********* 2025-08-29 21:04:11.430526 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430532 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430538 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430544 | orchestrator | 2025-08-29 21:04:11.430550 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-08-29 21:04:11.430557 | orchestrator | 2025-08-29 21:04:11.430563 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.430569 | orchestrator | Friday 29 August 2025 20:56:49 +0000 (0:00:00.471) 0:04:07.310 ********* 2025-08-29 21:04:11.430575 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.430582 | orchestrator | 2025-08-29 21:04:11.430588 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.430594 | orchestrator | Friday 29 August 2025 20:56:49 +0000 (0:00:00.592) 0:04:07.903 ********* 2025-08-29 21:04:11.430600 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.430607 | orchestrator | 2025-08-29 21:04:11.430613 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.430619 | orchestrator | Friday 29 August 2025 20:56:50 +0000 (0:00:00.463) 0:04:08.366 ********* 2025-08-29 21:04:11.430625 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430631 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430638 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430644 | orchestrator | 2025-08-29 21:04:11.430650 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.430656 | orchestrator | Friday 29 August 2025 20:56:51 +0000 (0:00:00.842) 0:04:09.208 ********* 2025-08-29 21:04:11.430662 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430675 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430681 | orchestrator | 2025-08-29 21:04:11.430687 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.430697 | orchestrator | Friday 29 August 2025 20:56:51 +0000 (0:00:00.293) 0:04:09.502 ********* 2025-08-29 21:04:11.430704 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430710 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430716 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430722 | orchestrator | 2025-08-29 21:04:11.430728 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.430735 | orchestrator | Friday 29 August 2025 20:56:51 +0000 (0:00:00.280) 0:04:09.783 ********* 2025-08-29 21:04:11.430741 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430747 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430753 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430760 | orchestrator | 2025-08-29 21:04:11.430766 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.430772 | orchestrator | Friday 29 August 2025 20:56:51 +0000 (0:00:00.266) 0:04:10.049 ********* 2025-08-29 21:04:11.430778 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430785 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430791 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430800 | orchestrator | 2025-08-29 21:04:11.430806 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.430813 | orchestrator | Friday 29 August 2025 20:56:52 +0000 (0:00:00.870) 0:04:10.919 ********* 2025-08-29 21:04:11.430819 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430825 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430831 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430837 | orchestrator | 2025-08-29 21:04:11.430844 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.430850 | orchestrator | Friday 29 August 2025 20:56:53 +0000 (0:00:00.281) 0:04:11.201 ********* 2025-08-29 21:04:11.430856 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430862 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.430875 | orchestrator | 2025-08-29 21:04:11.430881 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.430891 | orchestrator | Friday 29 August 2025 20:56:53 +0000 (0:00:00.360) 0:04:11.561 ********* 2025-08-29 21:04:11.430897 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430904 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430910 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430916 | orchestrator | 2025-08-29 21:04:11.430922 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.430928 | orchestrator | Friday 29 August 2025 20:56:54 +0000 (0:00:00.750) 0:04:12.312 ********* 2025-08-29 21:04:11.430935 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.430941 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.430947 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.430953 | orchestrator | 2025-08-29 21:04:11.430959 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.430966 | 
orchestrator | Friday 29 August 2025 20:56:54 +0000 (0:00:00.671) 0:04:12.984 ********* 2025-08-29 21:04:11.430972 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.430990 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.430997 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431003 | orchestrator | 2025-08-29 21:04:11.431009 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.431016 | orchestrator | Friday 29 August 2025 20:56:55 +0000 (0:00:00.453) 0:04:13.437 ********* 2025-08-29 21:04:11.431022 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431028 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431034 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431040 | orchestrator | 2025-08-29 21:04:11.431046 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.431053 | orchestrator | Friday 29 August 2025 20:56:55 +0000 (0:00:00.361) 0:04:13.799 ********* 2025-08-29 21:04:11.431063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431069 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.431075 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431081 | orchestrator | 2025-08-29 21:04:11.431088 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.431094 | orchestrator | Friday 29 August 2025 20:56:55 +0000 (0:00:00.260) 0:04:14.060 ********* 2025-08-29 21:04:11.431100 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431106 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.431112 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431119 | orchestrator | 2025-08-29 21:04:11.431125 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.431131 | orchestrator | Friday 29 August 2025 20:56:56 +0000 (0:00:00.280) 0:04:14.340 ********* 2025-08-29 21:04:11.431137 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431143 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.431149 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431155 | orchestrator | 2025-08-29 21:04:11.431162 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.431168 | orchestrator | Friday 29 August 2025 20:56:56 +0000 (0:00:00.456) 0:04:14.797 ********* 2025-08-29 21:04:11.431174 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431180 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.431186 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431193 | orchestrator | 2025-08-29 21:04:11.431199 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.431205 | orchestrator | Friday 29 August 2025 20:56:56 +0000 (0:00:00.290) 0:04:15.088 ********* 2025-08-29 21:04:11.431211 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431217 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.431223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.431229 | orchestrator | 2025-08-29 21:04:11.431236 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.431242 | orchestrator | Friday 29 August 2025 20:56:57 +0000 (0:00:00.303) 0:04:15.391 
********* 2025-08-29 21:04:11.431248 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431254 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431261 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431267 | orchestrator | 2025-08-29 21:04:11.431273 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.431279 | orchestrator | Friday 29 August 2025 20:56:57 +0000 (0:00:00.267) 0:04:15.659 ********* 2025-08-29 21:04:11.431285 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431292 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431298 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431304 | orchestrator | 2025-08-29 21:04:11.431310 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.431316 | orchestrator | Friday 29 August 2025 20:56:57 +0000 (0:00:00.432) 0:04:16.092 ********* 2025-08-29 21:04:11.431322 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431329 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431335 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431341 | orchestrator | 2025-08-29 21:04:11.431347 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-08-29 21:04:11.431353 | orchestrator | Friday 29 August 2025 20:56:58 +0000 (0:00:00.467) 0:04:16.559 ********* 2025-08-29 21:04:11.431359 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431366 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431372 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431378 | orchestrator | 2025-08-29 21:04:11.431387 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-08-29 21:04:11.431393 | orchestrator | Friday 29 August 2025 20:56:58 +0000 (0:00:00.302) 0:04:16.862 ********* 2025-08-29 21:04:11.431400 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.431410 | orchestrator | 2025-08-29 21:04:11.431416 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-08-29 21:04:11.431423 | orchestrator | Friday 29 August 2025 20:56:59 +0000 (0:00:00.639) 0:04:17.501 ********* 2025-08-29 21:04:11.431429 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.431435 | orchestrator | 2025-08-29 21:04:11.431441 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-08-29 21:04:11.431447 | orchestrator | Friday 29 August 2025 20:56:59 +0000 (0:00:00.119) 0:04:17.621 ********* 2025-08-29 21:04:11.431454 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-08-29 21:04:11.431460 | orchestrator | 2025-08-29 21:04:11.431470 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-08-29 21:04:11.431476 | orchestrator | Friday 29 August 2025 20:57:00 +0000 (0:00:00.868) 0:04:18.490 ********* 2025-08-29 21:04:11.431482 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431489 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431495 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431501 | orchestrator | 2025-08-29 21:04:11.431507 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-08-29 21:04:11.431513 | orchestrator | Friday 29 August 2025 20:57:00 +0000 
(0:00:00.298) 0:04:18.789 ********* 2025-08-29 21:04:11.431520 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431526 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431532 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431538 | orchestrator | 2025-08-29 21:04:11.431544 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-08-29 21:04:11.431551 | orchestrator | Friday 29 August 2025 20:57:01 +0000 (0:00:00.453) 0:04:19.242 ********* 2025-08-29 21:04:11.431557 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431563 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.431569 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.431575 | orchestrator | 2025-08-29 21:04:11.431582 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-08-29 21:04:11.431588 | orchestrator | Friday 29 August 2025 20:57:02 +0000 (0:00:01.259) 0:04:20.502 ********* 2025-08-29 21:04:11.431594 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431600 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.431607 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.431613 | orchestrator | 2025-08-29 21:04:11.431619 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-08-29 21:04:11.431625 | orchestrator | Friday 29 August 2025 20:57:03 +0000 (0:00:00.820) 0:04:21.323 ********* 2025-08-29 21:04:11.431632 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431638 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.431644 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.431650 | orchestrator | 2025-08-29 21:04:11.431656 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-08-29 21:04:11.431663 | orchestrator | Friday 29 August 2025 20:57:03 +0000 (0:00:00.637) 0:04:21.960 ********* 2025-08-29 21:04:11.431669 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431675 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431681 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431687 | orchestrator | 2025-08-29 21:04:11.431694 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-08-29 21:04:11.431700 | orchestrator | Friday 29 August 2025 20:57:04 +0000 (0:00:00.880) 0:04:22.841 ********* 2025-08-29 21:04:11.431706 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431712 | orchestrator | 2025-08-29 21:04:11.431719 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-08-29 21:04:11.431725 | orchestrator | Friday 29 August 2025 20:57:05 +0000 (0:00:01.163) 0:04:24.005 ********* 2025-08-29 21:04:11.431731 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431737 | orchestrator | 2025-08-29 21:04:11.431743 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-08-29 21:04:11.431754 | orchestrator | Friday 29 August 2025 20:57:06 +0000 (0:00:00.679) 0:04:24.684 ********* 2025-08-29 21:04:11.431760 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.431766 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.431772 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.431779 | orchestrator | 
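
The keyring tasks above amount to creating a mon. bootstrap keyring and a client.admin keyring and distributing both to all three monitors. Done by hand with ceph-authtool (the standard manual-deployment flow, not necessarily the exact module calls the role makes), that is roughly:

    # Monitor bootstrap keyring plus admin keyring, then merge the latter into the former.
    ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
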
changed: [testbed-node-1] => (item=None) 2025-08-29 21:04:11.431785 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:04:11.431791 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:04:11.431798 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:04:11.431804 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-08-29 21:04:11.431810 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:04:11.431816 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-08-29 21:04:11.431823 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-08-29 21:04:11.431829 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-08-29 21:04:11.431835 | orchestrator | 2025-08-29 21:04:11.431841 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-08-29 21:04:11.431847 | orchestrator | Friday 29 August 2025 20:57:09 +0000 (0:00:03.188) 0:04:27.872 ********* 2025-08-29 21:04:11.431854 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431860 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.431867 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.431873 | orchestrator | 2025-08-29 21:04:11.431879 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-08-29 21:04:11.431885 | orchestrator | Friday 29 August 2025 20:57:10 +0000 (0:00:01.229) 0:04:29.102 ********* 2025-08-29 21:04:11.431891 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431898 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431904 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431910 | orchestrator | 2025-08-29 21:04:11.431916 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-08-29 21:04:11.431922 | orchestrator | Friday 29 August 2025 20:57:11 +0000 (0:00:00.427) 0:04:29.530 ********* 2025-08-29 21:04:11.431929 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.431935 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.431941 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.431947 | orchestrator | 2025-08-29 21:04:11.431953 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-08-29 21:04:11.431960 | orchestrator | Friday 29 August 2025 20:57:11 +0000 (0:00:00.266) 0:04:29.796 ********* 2025-08-29 21:04:11.431966 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.431972 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.431989 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.431996 | orchestrator | 2025-08-29 21:04:11.432002 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-08-29 21:04:11.432011 | orchestrator | Friday 29 August 2025 20:57:13 +0000 (0:00:01.623) 0:04:31.420 ********* 2025-08-29 21:04:11.432018 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.432024 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.432030 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.432037 | orchestrator | 2025-08-29 21:04:11.432043 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-08-29 21:04:11.432049 | orchestrator | Friday 29 August 2025 20:57:14 +0000 (0:00:01.418) 0:04:32.838 
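
'Generate initial monmap' and 'Ceph monitor mkfs with keyring' above are the classic monitor bootstrap steps. A rough manual sketch follows; the fsid is a placeholder, while the monitor names and addresses are the three controllers seen earlier in this play (192.168.16.10-12).

    # Build the initial monitor map and initialise one monitor's data directory.
    monmaptool --create --add testbed-node-0 192.168.16.10 --add testbed-node-1 192.168.16.11 \
        --add testbed-node-2 192.168.16.12 --fsid "$(uuidgen)" /tmp/monmap
    ceph-mon --mkfs -i testbed-node-0 --monmap /tmp/monmap --keyring /etc/ceph/ceph.mon.keyring
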
********* 2025-08-29 21:04:11.432055 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432061 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432068 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432074 | orchestrator | 2025-08-29 21:04:11.432080 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-08-29 21:04:11.432090 | orchestrator | Friday 29 August 2025 20:57:14 +0000 (0:00:00.291) 0:04:33.130 ********* 2025-08-29 21:04:11.432097 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432103 | orchestrator | 2025-08-29 21:04:11.432109 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-08-29 21:04:11.432116 | orchestrator | Friday 29 August 2025 20:57:15 +0000 (0:00:00.742) 0:04:33.872 ********* 2025-08-29 21:04:11.432122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432128 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432135 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432141 | orchestrator | 2025-08-29 21:04:11.432147 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-08-29 21:04:11.432153 | orchestrator | Friday 29 August 2025 20:57:15 +0000 (0:00:00.285) 0:04:34.158 ********* 2025-08-29 21:04:11.432160 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432166 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432172 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432178 | orchestrator | 2025-08-29 21:04:11.432185 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-08-29 21:04:11.432191 | orchestrator | Friday 29 August 2025 20:57:16 +0000 (0:00:00.272) 0:04:34.430 ********* 2025-08-29 21:04:11.432197 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432204 | orchestrator | 2025-08-29 21:04:11.432210 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-08-29 21:04:11.432216 | orchestrator | Friday 29 August 2025 20:57:16 +0000 (0:00:00.718) 0:04:35.149 ********* 2025-08-29 21:04:11.432222 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.432229 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.432235 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.432241 | orchestrator | 2025-08-29 21:04:11.432247 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-08-29 21:04:11.432253 | orchestrator | Friday 29 August 2025 20:57:18 +0000 (0:00:01.552) 0:04:36.702 ********* 2025-08-29 21:04:11.432260 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.432266 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.432272 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.432279 | orchestrator | 2025-08-29 21:04:11.432285 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-08-29 21:04:11.432291 | orchestrator | Friday 29 August 2025 20:57:19 +0000 (0:00:01.336) 0:04:38.038 ********* 2025-08-29 21:04:11.432297 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.432303 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.432310 | orchestrator | 
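
Because this is a containerized deployment, the role templates a systemd unit for the mon container plus a ceph-mon.target and enables the target, which is what the following 'Start the monitor service' task relies on. A rough manual equivalent, assuming ceph-ansible's usual instance naming (hostname as the instance id):

    # Pick up the freshly templated units, enable the grouping target, start this host's mon.
    systemctl daemon-reload
    systemctl enable --now ceph-mon.target
    systemctl start ceph-mon@testbed-node-0.service
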
changed: [testbed-node-2] 2025-08-29 21:04:11.432316 | orchestrator | 2025-08-29 21:04:11.432322 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-08-29 21:04:11.432328 | orchestrator | Friday 29 August 2025 20:57:23 +0000 (0:00:03.421) 0:04:41.460 ********* 2025-08-29 21:04:11.432335 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.432341 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.432347 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.432353 | orchestrator | 2025-08-29 21:04:11.432400 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-08-29 21:04:11.432413 | orchestrator | Friday 29 August 2025 20:57:25 +0000 (0:00:02.251) 0:04:43.712 ********* 2025-08-29 21:04:11.432420 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432426 | orchestrator | 2025-08-29 21:04:11.432432 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-08-29 21:04:11.432438 | orchestrator | Friday 29 August 2025 20:57:25 +0000 (0:00:00.459) 0:04:44.171 ********* 2025-08-29 21:04:11.432445 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-08-29 21:04:11.432455 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.432461 | orchestrator | 2025-08-29 21:04:11.432467 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-08-29 21:04:11.432473 | orchestrator | Friday 29 August 2025 20:57:48 +0000 (0:00:22.206) 0:05:06.378 ********* 2025-08-29 21:04:11.432479 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.432489 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.432495 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.432501 | orchestrator | 2025-08-29 21:04:11.432507 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-08-29 21:04:11.432513 | orchestrator | Friday 29 August 2025 20:57:58 +0000 (0:00:10.747) 0:05:17.125 ********* 2025-08-29 21:04:11.432519 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432526 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432538 | orchestrator | 2025-08-29 21:04:11.432544 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-08-29 21:04:11.432550 | orchestrator | Friday 29 August 2025 20:57:59 +0000 (0:00:00.305) 0:05:17.430 ********* 2025-08-29 21:04:11.432563 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-08-29 21:04:11.432571 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2025-08-29 21:04:11.432578 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-08-29 21:04:11.432586 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-08-29 21:04:11.432593 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-08-29 21:04:11.432600 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0f79e33c500ea2ac21fd5c87fe71f2bba775f5c7'}])  2025-08-29 21:04:11.432608 | orchestrator | 2025-08-29 21:04:11.432614 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.432621 | orchestrator | Friday 29 August 2025 20:58:14 +0000 (0:00:15.009) 0:05:32.440 ********* 2025-08-29 21:04:11.432627 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432637 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432643 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432649 | orchestrator | 2025-08-29 21:04:11.432656 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-08-29 21:04:11.432662 | orchestrator | Friday 29 August 2025 20:58:14 +0000 (0:00:00.333) 0:05:32.774 ********* 2025-08-29 21:04:11.432668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432674 | orchestrator | 2025-08-29 21:04:11.432680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-08-29 21:04:11.432687 | orchestrator | Friday 29 August 2025 20:58:15 +0000 (0:00:00.548) 0:05:33.323 ********* 2025-08-29 21:04:11.432693 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.432699 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.432705 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.432711 | orchestrator | 2025-08-29 21:04:11.432718 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-08-29 21:04:11.432724 | orchestrator | Friday 29 August 2025 20:58:15 +0000 (0:00:00.570) 0:05:33.894 ********* 2025-08-29 21:04:11.432730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432736 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 21:04:11.432742 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.432748 | orchestrator | 2025-08-29 21:04:11.432755 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-08-29 21:04:11.432764 | orchestrator | Friday 29 August 2025 20:58:16 +0000 (0:00:00.364) 0:05:34.258 ********* 2025-08-29 21:04:11.432770 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.432776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.432783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.432789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432795 | orchestrator | 2025-08-29 21:04:11.432801 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-08-29 21:04:11.432807 | orchestrator | Friday 29 August 2025 20:58:16 +0000 (0:00:00.574) 0:05:34.833 ********* 2025-08-29 21:04:11.432813 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.432820 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.432826 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.432832 | orchestrator | 2025-08-29 21:04:11.432838 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-08-29 21:04:11.432844 | orchestrator | 2025-08-29 21:04:11.432851 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.432860 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:00.517) 0:05:35.351 ********* 2025-08-29 21:04:11.432867 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432873 | orchestrator | 2025-08-29 21:04:11.432879 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.432885 | orchestrator | Friday 29 August 2025 20:58:17 +0000 (0:00:00.742) 0:05:36.094 ********* 2025-08-29 21:04:11.432892 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.432898 | orchestrator | 2025-08-29 21:04:11.432904 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.432910 | orchestrator | Friday 29 August 2025 20:58:18 +0000 (0:00:00.484) 0:05:36.579 ********* 2025-08-29 21:04:11.432917 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.432923 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.432929 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.432935 | orchestrator | 2025-08-29 21:04:11.432942 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.432948 | orchestrator | Friday 29 August 2025 20:58:19 +0000 (0:00:01.023) 0:05:37.602 ********* 2025-08-29 21:04:11.432958 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.432964 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.432971 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433010 | orchestrator | 2025-08-29 21:04:11.433017 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.433024 | orchestrator | Friday 29 August 2025 20:58:19 +0000 (0:00:00.322) 
0:05:37.924 ********* 2025-08-29 21:04:11.433030 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433036 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433043 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433049 | orchestrator | 2025-08-29 21:04:11.433055 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.433062 | orchestrator | Friday 29 August 2025 20:58:20 +0000 (0:00:00.298) 0:05:38.222 ********* 2025-08-29 21:04:11.433068 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433074 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433080 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433087 | orchestrator | 2025-08-29 21:04:11.433093 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.433099 | orchestrator | Friday 29 August 2025 20:58:20 +0000 (0:00:00.292) 0:05:38.515 ********* 2025-08-29 21:04:11.433105 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433112 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433118 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433124 | orchestrator | 2025-08-29 21:04:11.433130 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.433137 | orchestrator | Friday 29 August 2025 20:58:21 +0000 (0:00:00.974) 0:05:39.490 ********* 2025-08-29 21:04:11.433143 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433149 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433156 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433162 | orchestrator | 2025-08-29 21:04:11.433168 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.433175 | orchestrator | Friday 29 August 2025 20:58:21 +0000 (0:00:00.333) 0:05:39.823 ********* 2025-08-29 21:04:11.433180 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433185 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433191 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433196 | orchestrator | 2025-08-29 21:04:11.433202 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.433207 | orchestrator | Friday 29 August 2025 20:58:21 +0000 (0:00:00.304) 0:05:40.128 ********* 2025-08-29 21:04:11.433213 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433218 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433224 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433229 | orchestrator | 2025-08-29 21:04:11.433235 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.433240 | orchestrator | Friday 29 August 2025 20:58:22 +0000 (0:00:00.761) 0:05:40.890 ********* 2025-08-29 21:04:11.433246 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433251 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433257 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433262 | orchestrator | 2025-08-29 21:04:11.433268 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.433273 | orchestrator | Friday 29 August 2025 20:58:23 +0000 (0:00:01.021) 0:05:41.911 ********* 2025-08-29 21:04:11.433279 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
21:04:11.433284 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433295 | orchestrator | 2025-08-29 21:04:11.433301 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.433306 | orchestrator | Friday 29 August 2025 20:58:24 +0000 (0:00:00.305) 0:05:42.217 ********* 2025-08-29 21:04:11.433312 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433320 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433329 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433335 | orchestrator | 2025-08-29 21:04:11.433341 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.433346 | orchestrator | Friday 29 August 2025 20:58:24 +0000 (0:00:00.390) 0:05:42.608 ********* 2025-08-29 21:04:11.433352 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433357 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433363 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433368 | orchestrator | 2025-08-29 21:04:11.433374 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.433379 | orchestrator | Friday 29 August 2025 20:58:24 +0000 (0:00:00.295) 0:05:42.904 ********* 2025-08-29 21:04:11.433384 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433390 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433401 | orchestrator | 2025-08-29 21:04:11.433410 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.433416 | orchestrator | Friday 29 August 2025 20:58:25 +0000 (0:00:00.669) 0:05:43.573 ********* 2025-08-29 21:04:11.433421 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433427 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433432 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433437 | orchestrator | 2025-08-29 21:04:11.433443 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.433448 | orchestrator | Friday 29 August 2025 20:58:25 +0000 (0:00:00.363) 0:05:43.937 ********* 2025-08-29 21:04:11.433454 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433459 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433465 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433471 | orchestrator | 2025-08-29 21:04:11.433476 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.433482 | orchestrator | Friday 29 August 2025 20:58:26 +0000 (0:00:00.316) 0:05:44.253 ********* 2025-08-29 21:04:11.433487 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433493 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433498 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433503 | orchestrator | 2025-08-29 21:04:11.433509 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.433514 | orchestrator | Friday 29 August 2025 20:58:26 +0000 (0:00:00.282) 0:05:44.536 ********* 2025-08-29 21:04:11.433520 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433525 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433531 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433536 | orchestrator | 2025-08-29 21:04:11.433542 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.433547 | orchestrator | Friday 29 August 2025 20:58:26 +0000 (0:00:00.465) 0:05:45.001 ********* 2025-08-29 21:04:11.433553 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433558 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433564 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433569 | orchestrator | 2025-08-29 21:04:11.433575 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.433580 | orchestrator | Friday 29 August 2025 20:58:27 +0000 (0:00:00.289) 0:05:45.290 ********* 2025-08-29 21:04:11.433586 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433591 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433597 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433602 | orchestrator | 2025-08-29 21:04:11.433608 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-08-29 21:04:11.433613 | orchestrator | Friday 29 August 2025 20:58:27 +0000 (0:00:00.433) 0:05:45.724 ********* 2025-08-29 21:04:11.433619 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:11.433624 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.433644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:04:11.433650 | orchestrator | 2025-08-29 21:04:11.433655 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-08-29 21:04:11.433661 | orchestrator | Friday 29 August 2025 20:58:28 +0000 (0:00:00.686) 0:05:46.411 ********* 2025-08-29 21:04:11.433666 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.433672 | orchestrator | 2025-08-29 21:04:11.433677 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-08-29 21:04:11.433683 | orchestrator | Friday 29 August 2025 20:58:28 +0000 (0:00:00.574) 0:05:46.986 ********* 2025-08-29 21:04:11.433688 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.433694 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.433699 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.433705 | orchestrator | 2025-08-29 21:04:11.433710 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-08-29 21:04:11.433716 | orchestrator | Friday 29 August 2025 20:58:29 +0000 (0:00:00.651) 0:05:47.637 ********* 2025-08-29 21:04:11.433721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433727 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433732 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433738 | orchestrator | 2025-08-29 21:04:11.433743 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-08-29 21:04:11.433749 | orchestrator | Friday 29 August 2025 20:58:29 +0000 (0:00:00.267) 0:05:47.905 ********* 2025-08-29 21:04:11.433754 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.433760 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.433765 | orchestrator | 
changed: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.433771 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-08-29 21:04:11.433776 | orchestrator | 2025-08-29 21:04:11.433782 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-08-29 21:04:11.433787 | orchestrator | Friday 29 August 2025 20:58:40 +0000 (0:00:11.218) 0:05:59.123 ********* 2025-08-29 21:04:11.433793 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433798 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433804 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433809 | orchestrator | 2025-08-29 21:04:11.433818 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-08-29 21:04:11.433823 | orchestrator | Friday 29 August 2025 20:58:41 +0000 (0:00:00.607) 0:05:59.731 ********* 2025-08-29 21:04:11.433829 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 21:04:11.433834 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 21:04:11.433840 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 21:04:11.433845 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.433851 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.433856 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.433862 | orchestrator | 2025-08-29 21:04:11.433867 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-08-29 21:04:11.433876 | orchestrator | Friday 29 August 2025 20:58:43 +0000 (0:00:02.297) 0:06:02.029 ********* 2025-08-29 21:04:11.433882 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 21:04:11.433887 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 21:04:11.433892 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 21:04:11.433898 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:04:11.433903 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 21:04:11.433908 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 21:04:11.433914 | orchestrator | 2025-08-29 21:04:11.433919 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-08-29 21:04:11.433928 | orchestrator | Friday 29 August 2025 20:58:45 +0000 (0:00:01.208) 0:06:03.237 ********* 2025-08-29 21:04:11.433934 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.433939 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.433945 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.433950 | orchestrator | 2025-08-29 21:04:11.433956 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-08-29 21:04:11.433961 | orchestrator | Friday 29 August 2025 20:58:45 +0000 (0:00:00.658) 0:06:03.895 ********* 2025-08-29 21:04:11.433967 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.433972 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.433988 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.433993 | orchestrator | 2025-08-29 21:04:11.433999 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-08-29 21:04:11.434004 | orchestrator | Friday 29 August 2025 20:58:46 +0000 (0:00:00.426) 0:06:04.322 ********* 
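The mgr key handling just above ('Create ceph mgr keyring(s) on a mon node', 'Get keys from monitors', 'Copy ceph key(s) if needed', 'Set mgr key permissions') boils down to creating one mgr keyring per manager host on the first monitor and shipping it into that manager's data directory. The tasks below are a minimal sketch of that flow under stated assumptions, not the actual ceph-ansible implementation; the 'mons' group name, the /etc/ceph staging path and the container ceph UID/GID of 167 are assumptions.

# Sketch only, not the ceph-ansible tasks. Assumes an inventory group "mons"
# whose first member can run the ceph CLI for this cluster.
- name: Create a mgr keyring for this host on the first monitor
  ansible.builtin.command: >
    ceph --cluster ceph auth get-or-create mgr.{{ ansible_facts['hostname'] }}
    mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    -o /etc/ceph/ceph.mgr.{{ ansible_facts['hostname'] }}.keyring
  args:
    creates: "/etc/ceph/ceph.mgr.{{ ansible_facts['hostname'] }}.keyring"
  delegate_to: "{{ groups['mons'][0] }}"

- name: Read the keyring back from the monitor
  ansible.builtin.slurp:
    src: "/etc/ceph/ceph.mgr.{{ ansible_facts['hostname'] }}.keyring"
  register: _mgr_keyring
  delegate_to: "{{ groups['mons'][0] }}"

- name: Install the keyring into the mgr data directory
  ansible.builtin.copy:
    content: "{{ _mgr_keyring.content | b64decode }}"
    dest: "/var/lib/ceph/mgr/ceph-{{ ansible_facts['hostname'] }}/keyring"
    owner: "167"  # ceph UID used inside the containers; an assumption here
    group: "167"
    mode: "0600"

Delegating the auth call to the first monitor keeps key generation in one place; the keyring is then copied out with permissions tight enough for the containerised mgr to read it, which matches the 'Set mgr key permissions' step logged above.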
2025-08-29 21:04:11.434010 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.434086 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.434092 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.434097 | orchestrator | 2025-08-29 21:04:11.434102 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-08-29 21:04:11.434108 | orchestrator | Friday 29 August 2025 20:58:46 +0000 (0:00:00.261) 0:06:04.583 ********* 2025-08-29 21:04:11.434113 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.434119 | orchestrator | 2025-08-29 21:04:11.434124 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-08-29 21:04:11.434130 | orchestrator | Friday 29 August 2025 20:58:46 +0000 (0:00:00.458) 0:06:05.042 ********* 2025-08-29 21:04:11.434135 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.434140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.434146 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.434151 | orchestrator | 2025-08-29 21:04:11.434157 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-08-29 21:04:11.434162 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:00.439) 0:06:05.481 ********* 2025-08-29 21:04:11.434167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.434173 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.434178 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.434184 | orchestrator | 2025-08-29 21:04:11.434189 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-08-29 21:04:11.434195 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:00.267) 0:06:05.749 ********* 2025-08-29 21:04:11.434200 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.434206 | orchestrator | 2025-08-29 21:04:11.434211 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-08-29 21:04:11.434216 | orchestrator | Friday 29 August 2025 20:58:47 +0000 (0:00:00.430) 0:06:06.180 ********* 2025-08-29 21:04:11.434222 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434227 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434233 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.434238 | orchestrator | 2025-08-29 21:04:11.434244 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-08-29 21:04:11.434249 | orchestrator | Friday 29 August 2025 20:58:49 +0000 (0:00:01.466) 0:06:07.646 ********* 2025-08-29 21:04:11.434254 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434260 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434265 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.434271 | orchestrator | 2025-08-29 21:04:11.434276 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-08-29 21:04:11.434281 | orchestrator | Friday 29 August 2025 20:58:50 +0000 (0:00:01.203) 0:06:08.850 ********* 2025-08-29 21:04:11.434287 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434297 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434303 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 21:04:11.434308 | orchestrator | 2025-08-29 21:04:11.434314 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-08-29 21:04:11.434319 | orchestrator | Friday 29 August 2025 20:58:52 +0000 (0:00:01.805) 0:06:10.656 ********* 2025-08-29 21:04:11.434325 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434330 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434335 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.434341 | orchestrator | 2025-08-29 21:04:11.434346 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-08-29 21:04:11.434355 | orchestrator | Friday 29 August 2025 20:58:54 +0000 (0:00:01.822) 0:06:12.478 ********* 2025-08-29 21:04:11.434361 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.434366 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.434372 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-08-29 21:04:11.434377 | orchestrator | 2025-08-29 21:04:11.434382 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-08-29 21:04:11.434388 | orchestrator | Friday 29 August 2025 20:58:54 +0000 (0:00:00.701) 0:06:13.180 ********* 2025-08-29 21:04:11.434393 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-08-29 21:04:11.434399 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-08-29 21:04:11.434422 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-08-29 21:04:11.434429 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-08-29 21:04:11.434435 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-08-29 21:04:11.434440 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.434446 | orchestrator | 2025-08-29 21:04:11.434451 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-08-29 21:04:11.434457 | orchestrator | Friday 29 August 2025 20:59:25 +0000 (0:00:30.174) 0:06:43.355 ********* 2025-08-29 21:04:11.434462 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.434467 | orchestrator | 2025-08-29 21:04:11.434473 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-08-29 21:04:11.434478 | orchestrator | Friday 29 August 2025 20:59:26 +0000 (0:00:01.279) 0:06:44.635 ********* 2025-08-29 21:04:11.434483 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.434489 | orchestrator | 2025-08-29 21:04:11.434494 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-08-29 21:04:11.434500 | orchestrator | Friday 29 August 2025 20:59:26 +0000 (0:00:00.307) 0:06:44.942 ********* 2025-08-29 21:04:11.434505 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.434510 | orchestrator | 2025-08-29 21:04:11.434516 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-08-29 21:04:11.434521 | orchestrator | Friday 29 August 2025 20:59:26 +0000 (0:00:00.160) 0:06:45.102 ********* 2025-08-29 21:04:11.434527 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-08-29 21:04:11.434532 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-08-29 21:04:11.434537 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-08-29 21:04:11.434543 | orchestrator | 2025-08-29 21:04:11.434548 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-08-29 21:04:11.434554 | orchestrator | Friday 29 August 2025 20:59:33 +0000 (0:00:06.323) 0:06:51.426 ********* 2025-08-29 21:04:11.434559 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-08-29 21:04:11.434564 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-08-29 21:04:11.434574 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-08-29 21:04:11.434579 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-08-29 21:04:11.434585 | orchestrator | 2025-08-29 21:04:11.434590 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.434596 | orchestrator | Friday 29 August 2025 20:59:38 +0000 (0:00:05.001) 0:06:56.428 ********* 2025-08-29 21:04:11.434601 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434606 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434612 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.434617 | orchestrator | 2025-08-29 21:04:11.434622 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 21:04:11.434628 | orchestrator | Friday 29 August 2025 20:59:38 +0000 (0:00:00.651) 0:06:57.080 ********* 2025-08-29 21:04:11.434633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:11.434638 | orchestrator | 2025-08-29 
21:04:11.434644 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 21:04:11.434649 | orchestrator | Friday 29 August 2025 20:59:39 +0000 (0:00:00.480) 0:06:57.560 ********* 2025-08-29 21:04:11.434655 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.434660 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.434665 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.434671 | orchestrator | 2025-08-29 21:04:11.434676 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 21:04:11.434681 | orchestrator | Friday 29 August 2025 20:59:39 +0000 (0:00:00.521) 0:06:58.082 ********* 2025-08-29 21:04:11.434687 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.434692 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.434698 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.434703 | orchestrator | 2025-08-29 21:04:11.434708 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 21:04:11.434714 | orchestrator | Friday 29 August 2025 20:59:41 +0000 (0:00:01.130) 0:06:59.213 ********* 2025-08-29 21:04:11.434719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 21:04:11.434724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 21:04:11.434730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 21:04:11.434735 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.434740 | orchestrator | 2025-08-29 21:04:11.434750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 21:04:11.434756 | orchestrator | Friday 29 August 2025 20:59:41 +0000 (0:00:00.582) 0:06:59.796 ********* 2025-08-29 21:04:11.434761 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.434767 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.434772 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.434777 | orchestrator | 2025-08-29 21:04:11.434783 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-08-29 21:04:11.434788 | orchestrator | 2025-08-29 21:04:11.434794 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.434799 | orchestrator | Friday 29 August 2025 20:59:42 +0000 (0:00:00.759) 0:07:00.555 ********* 2025-08-29 21:04:11.434804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.434810 | orchestrator | 2025-08-29 21:04:11.434815 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.434837 | orchestrator | Friday 29 August 2025 20:59:42 +0000 (0:00:00.487) 0:07:01.043 ********* 2025-08-29 21:04:11.434844 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.434849 | orchestrator | 2025-08-29 21:04:11.434855 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.434864 | orchestrator | Friday 29 August 2025 20:59:43 +0000 (0:00:00.685) 0:07:01.728 ********* 2025-08-29 21:04:11.434869 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.434875 | orchestrator | skipping: [testbed-node-4] 
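A little earlier in the ceph-mgr play, 'Wait for all mgr to be up', 'Disable ceph mgr enabled modules' and 'Add modules to ceph-mgr' implement a poll-then-reconcile pattern: block until every manager has registered, then disable the modules that should not run (iostat, nfs and restful here) and enable the requested ones (dashboard and prometheus). The sketch below shows that pattern with plain ceph CLI calls under stated assumptions; it is not the real ceph-ansible task code, and the 'mons'/'mgrs' group names and the retry delay are assumptions.

# Sketch only. Module lists and the retry budget mirror the log above;
# the task structure, group names and delay are assumptions.
- name: Wait until every mgr daemon has registered
  ansible.builtin.command: ceph --cluster ceph mgr dump --format json
  register: _mgr_dump
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  changed_when: false
  retries: 30
  delay: 5
  until: >
    (_mgr_dump.stdout | from_json).active_name | default('') | length > 0 and
    ((_mgr_dump.stdout | from_json).standbys | length) + 1 >= groups['mgrs'] | length

- name: Disable unwanted mgr modules
  ansible.builtin.command: "ceph --cluster ceph mgr module disable {{ item }}"
  loop: [iostat, nfs, restful]
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Enable the requested mgr modules
  ansible.builtin.command: "ceph --cluster ceph mgr module enable {{ item }}"
  loop: [dashboard, prometheus]
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

In the run above the wait needed a handful of retries (about 30 seconds) before all three managers were reported up; the check is run from a single host and delegated to the first monitor, which is what the 'testbed-node-2 -> testbed-node-0' prefix in those retry lines reflects.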
2025-08-29 21:04:11.434880 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.434886 | orchestrator | 2025-08-29 21:04:11.434891 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.434896 | orchestrator | Friday 29 August 2025 20:59:43 +0000 (0:00:00.319) 0:07:02.048 ********* 2025-08-29 21:04:11.434902 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.434907 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.434912 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.434918 | orchestrator | 2025-08-29 21:04:11.434923 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.434928 | orchestrator | Friday 29 August 2025 20:59:44 +0000 (0:00:00.649) 0:07:02.697 ********* 2025-08-29 21:04:11.434934 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.434939 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.434944 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.434950 | orchestrator | 2025-08-29 21:04:11.434955 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.434960 | orchestrator | Friday 29 August 2025 20:59:45 +0000 (0:00:00.830) 0:07:03.527 ********* 2025-08-29 21:04:11.434966 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.434971 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435004 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435010 | orchestrator | 2025-08-29 21:04:11.435016 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.435021 | orchestrator | Friday 29 August 2025 20:59:46 +0000 (0:00:00.892) 0:07:04.420 ********* 2025-08-29 21:04:11.435027 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435032 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435037 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435043 | orchestrator | 2025-08-29 21:04:11.435048 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.435053 | orchestrator | Friday 29 August 2025 20:59:46 +0000 (0:00:00.311) 0:07:04.731 ********* 2025-08-29 21:04:11.435059 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435064 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435070 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435075 | orchestrator | 2025-08-29 21:04:11.435081 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.435086 | orchestrator | Friday 29 August 2025 20:59:46 +0000 (0:00:00.301) 0:07:05.032 ********* 2025-08-29 21:04:11.435091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435097 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435102 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435107 | orchestrator | 2025-08-29 21:04:11.435113 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.435118 | orchestrator | Friday 29 August 2025 20:59:47 +0000 (0:00:00.313) 0:07:05.346 ********* 2025-08-29 21:04:11.435123 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435129 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435134 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435140 | orchestrator | 2025-08-29 
21:04:11.435145 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.435150 | orchestrator | Friday 29 August 2025 20:59:48 +0000 (0:00:00.873) 0:07:06.219 ********* 2025-08-29 21:04:11.435156 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435161 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435166 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435171 | orchestrator | 2025-08-29 21:04:11.435176 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.435181 | orchestrator | Friday 29 August 2025 20:59:48 +0000 (0:00:00.709) 0:07:06.928 ********* 2025-08-29 21:04:11.435189 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435193 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435198 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435203 | orchestrator | 2025-08-29 21:04:11.435208 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.435213 | orchestrator | Friday 29 August 2025 20:59:49 +0000 (0:00:00.278) 0:07:07.207 ********* 2025-08-29 21:04:11.435218 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435222 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435227 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435232 | orchestrator | 2025-08-29 21:04:11.435237 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.435242 | orchestrator | Friday 29 August 2025 20:59:49 +0000 (0:00:00.296) 0:07:07.503 ********* 2025-08-29 21:04:11.435246 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435251 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435256 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435261 | orchestrator | 2025-08-29 21:04:11.435268 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.435273 | orchestrator | Friday 29 August 2025 20:59:49 +0000 (0:00:00.581) 0:07:08.085 ********* 2025-08-29 21:04:11.435278 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435283 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435288 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435292 | orchestrator | 2025-08-29 21:04:11.435297 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.435302 | orchestrator | Friday 29 August 2025 20:59:50 +0000 (0:00:00.326) 0:07:08.412 ********* 2025-08-29 21:04:11.435307 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435311 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435316 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435321 | orchestrator | 2025-08-29 21:04:11.435326 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.435331 | orchestrator | Friday 29 August 2025 20:59:50 +0000 (0:00:00.365) 0:07:08.777 ********* 2025-08-29 21:04:11.435338 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435343 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435348 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435353 | orchestrator | 2025-08-29 21:04:11.435358 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 
21:04:11.435363 | orchestrator | Friday 29 August 2025 20:59:50 +0000 (0:00:00.304) 0:07:09.082 ********* 2025-08-29 21:04:11.435367 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435372 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435377 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435382 | orchestrator | 2025-08-29 21:04:11.435387 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.435391 | orchestrator | Friday 29 August 2025 20:59:51 +0000 (0:00:00.487) 0:07:09.569 ********* 2025-08-29 21:04:11.435396 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435401 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435406 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435410 | orchestrator | 2025-08-29 21:04:11.435415 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.435420 | orchestrator | Friday 29 August 2025 20:59:51 +0000 (0:00:00.281) 0:07:09.851 ********* 2025-08-29 21:04:11.435425 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435429 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435434 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435439 | orchestrator | 2025-08-29 21:04:11.435444 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.435449 | orchestrator | Friday 29 August 2025 20:59:51 +0000 (0:00:00.327) 0:07:10.178 ********* 2025-08-29 21:04:11.435454 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435458 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435466 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435471 | orchestrator | 2025-08-29 21:04:11.435476 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 21:04:11.435481 | orchestrator | Friday 29 August 2025 20:59:52 +0000 (0:00:00.510) 0:07:10.689 ********* 2025-08-29 21:04:11.435486 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435490 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435495 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435500 | orchestrator | 2025-08-29 21:04:11.435505 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 21:04:11.435510 | orchestrator | Friday 29 August 2025 20:59:53 +0000 (0:00:00.582) 0:07:11.272 ********* 2025-08-29 21:04:11.435515 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 21:04:11.435519 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:04:11.435524 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:04:11.435529 | orchestrator | 2025-08-29 21:04:11.435534 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 21:04:11.435539 | orchestrator | Friday 29 August 2025 20:59:53 +0000 (0:00:00.621) 0:07:11.893 ********* 2025-08-29 21:04:11.435543 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.435548 | orchestrator | 2025-08-29 21:04:11.435553 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 21:04:11.435558 | 
orchestrator | Friday 29 August 2025 20:59:54 +0000 (0:00:00.529) 0:07:12.423 ********* 2025-08-29 21:04:11.435563 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435567 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435572 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435577 | orchestrator | 2025-08-29 21:04:11.435582 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 21:04:11.435586 | orchestrator | Friday 29 August 2025 20:59:54 +0000 (0:00:00.537) 0:07:12.960 ********* 2025-08-29 21:04:11.435591 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435596 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435601 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435606 | orchestrator | 2025-08-29 21:04:11.435610 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 21:04:11.435615 | orchestrator | Friday 29 August 2025 20:59:55 +0000 (0:00:00.326) 0:07:13.287 ********* 2025-08-29 21:04:11.435620 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435625 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435630 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435634 | orchestrator | 2025-08-29 21:04:11.435639 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 21:04:11.435644 | orchestrator | Friday 29 August 2025 20:59:55 +0000 (0:00:00.618) 0:07:13.905 ********* 2025-08-29 21:04:11.435649 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.435654 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.435658 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.435663 | orchestrator | 2025-08-29 21:04:11.435668 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 21:04:11.435673 | orchestrator | Friday 29 August 2025 20:59:56 +0000 (0:00:00.359) 0:07:14.265 ********* 2025-08-29 21:04:11.435680 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 21:04:11.435685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 21:04:11.435690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 21:04:11.435695 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 21:04:11.435703 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 21:04:11.435708 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 21:04:11.435713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 21:04:11.435721 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 21:04:11.435726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 21:04:11.435730 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 21:04:11.435735 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 21:04:11.435740 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 21:04:11.435745 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 21:04:11.435750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 21:04:11.435754 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 21:04:11.435759 | orchestrator | 2025-08-29 21:04:11.435764 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-08-29 21:04:11.435769 | orchestrator | Friday 29 August 2025 20:59:59 +0000 (0:00:03.293) 0:07:17.559 ********* 2025-08-29 21:04:11.435773 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.435778 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.435783 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.435788 | orchestrator | 2025-08-29 21:04:11.435793 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 21:04:11.435797 | orchestrator | Friday 29 August 2025 20:59:59 +0000 (0:00:00.331) 0:07:17.890 ********* 2025-08-29 21:04:11.435802 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.435807 | orchestrator | 2025-08-29 21:04:11.435812 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 21:04:11.435817 | orchestrator | Friday 29 August 2025 21:00:00 +0000 (0:00:00.488) 0:07:18.379 ********* 2025-08-29 21:04:11.435821 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 21:04:11.435826 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 21:04:11.435831 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 21:04:11.435836 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 21:04:11.435841 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 21:04:11.435846 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 21:04:11.435850 | orchestrator | 2025-08-29 21:04:11.435855 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 21:04:11.435860 | orchestrator | Friday 29 August 2025 21:00:01 +0000 (0:00:01.276) 0:07:19.655 ********* 2025-08-29 21:04:11.435865 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.435870 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.435874 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.435879 | orchestrator | 2025-08-29 21:04:11.435884 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 21:04:11.435889 | orchestrator | Friday 29 August 2025 21:00:03 +0000 (0:00:02.079) 0:07:21.735 ********* 2025-08-29 21:04:11.435894 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 21:04:11.435899 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.435903 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 21:04:11.435908 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.435913 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 21:04:11.435921 | 
orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.435926 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 21:04:11.435930 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 21:04:11.435935 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.435940 | orchestrator | 2025-08-29 21:04:11.435945 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 21:04:11.435949 | orchestrator | Friday 29 August 2025 21:00:04 +0000 (0:00:01.141) 0:07:22.876 ********* 2025-08-29 21:04:11.435954 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.435959 | orchestrator | 2025-08-29 21:04:11.435964 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 21:04:11.435968 | orchestrator | Friday 29 August 2025 21:00:06 +0000 (0:00:02.006) 0:07:24.883 ********* 2025-08-29 21:04:11.435973 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.435988 | orchestrator | 2025-08-29 21:04:11.435993 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 21:04:11.436001 | orchestrator | Friday 29 August 2025 21:00:07 +0000 (0:00:00.454) 0:07:25.337 ********* 2025-08-29 21:04:11.436006 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0', 'data_vg': 'ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0'}) 2025-08-29 21:04:11.436011 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-275f26f1-4e1c-5372-9190-a1521a972d04', 'data_vg': 'ceph-275f26f1-4e1c-5372-9190-a1521a972d04'}) 2025-08-29 21:04:11.436016 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-76a76f98-f10a-56c2-85c8-c111ab4c87c6', 'data_vg': 'ceph-76a76f98-f10a-56c2-85c8-c111ab4c87c6'}) 2025-08-29 21:04:11.436024 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-79476f9b-63cb-5c74-926b-50a3eb682c43', 'data_vg': 'ceph-79476f9b-63cb-5c74-926b-50a3eb682c43'}) 2025-08-29 21:04:11.436029 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f3fee7d3-6bcf-515f-a6c3-caef0862fd99', 'data_vg': 'ceph-f3fee7d3-6bcf-515f-a6c3-caef0862fd99'}) 2025-08-29 21:04:11.436034 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c5db720f-fb16-50b5-adff-95cbe6288183', 'data_vg': 'ceph-c5db720f-fb16-50b5-adff-95cbe6288183'}) 2025-08-29 21:04:11.436039 | orchestrator | 2025-08-29 21:04:11.436044 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 21:04:11.436048 | orchestrator | Friday 29 August 2025 21:00:52 +0000 (0:00:45.409) 0:08:10.747 ********* 2025-08-29 21:04:11.436053 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436058 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436063 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436068 | orchestrator | 2025-08-29 21:04:11.436073 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 21:04:11.436077 | orchestrator | Friday 29 August 2025 21:00:52 +0000 (0:00:00.311) 0:08:11.059 ********* 2025-08-29 21:04:11.436082 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.436087 | orchestrator | 
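'Use ceph-volume to create osds' is the long pole of this play (about 45 seconds): each lvm_volumes item names a pre-provisioned logical volume (osd-block-UUID) on a matching ceph-UUID volume group, and each item becomes one bluestore OSD. Roughly, every item resolves to a ceph-volume call like the hedged sketch below; the exact flags and wrapper ceph-ansible uses may differ. The 'Set noup flag' a few entries earlier keeps the new OSDs from being marked up until the whole batch is ready.

# Sketch of what one lvm_volumes item from the log above translates to.
# Illustrative only; ceph-ansible drives ceph-volume through its own module.
- name: Create a bluestore OSD from a pre-built logical volume
  ansible.builtin.command: >
    ceph-volume --cluster ceph lvm create --bluestore
    --data {{ item.data_vg }}/{{ item.data }}
  loop:
    - data: osd-block-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0
      data_vg: ceph-028c3e14-b13d-554d-9ec8-e0bdecd4a1f0
  become: true

ceph-volume prepares the volume with bluestore metadata, registers the OSD with the monitors using the bootstrap-osd key copied earlier in the play, and activates it, which is why the per-node runtime scales with the number of listed volumes.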
2025-08-29 21:04:11.436092 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 21:04:11.436097 | orchestrator | Friday 29 August 2025 21:00:53 +0000 (0:00:00.509) 0:08:11.568 ********* 2025-08-29 21:04:11.436101 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.436106 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.436111 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.436116 | orchestrator | 2025-08-29 21:04:11.436121 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 21:04:11.436125 | orchestrator | Friday 29 August 2025 21:00:54 +0000 (0:00:00.922) 0:08:12.491 ********* 2025-08-29 21:04:11.436130 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.436138 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.436143 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.436148 | orchestrator | 2025-08-29 21:04:11.436153 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 21:04:11.436158 | orchestrator | Friday 29 August 2025 21:00:56 +0000 (0:00:02.615) 0:08:15.107 ********* 2025-08-29 21:04:11.436163 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.436168 | orchestrator | 2025-08-29 21:04:11.436172 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-08-29 21:04:11.436177 | orchestrator | Friday 29 August 2025 21:00:57 +0000 (0:00:00.511) 0:08:15.618 ********* 2025-08-29 21:04:11.436182 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.436187 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.436192 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.436196 | orchestrator | 2025-08-29 21:04:11.436201 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 21:04:11.436206 | orchestrator | Friday 29 August 2025 21:00:58 +0000 (0:00:01.361) 0:08:16.980 ********* 2025-08-29 21:04:11.436211 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.436216 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.436220 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.436225 | orchestrator | 2025-08-29 21:04:11.436230 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 21:04:11.436235 | orchestrator | Friday 29 August 2025 21:00:59 +0000 (0:00:01.196) 0:08:18.176 ********* 2025-08-29 21:04:11.436240 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.436244 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.436249 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.436254 | orchestrator | 2025-08-29 21:04:11.436259 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 21:04:11.436264 | orchestrator | Friday 29 August 2025 21:01:01 +0000 (0:00:01.680) 0:08:19.857 ********* 2025-08-29 21:04:11.436268 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436273 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436278 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436283 | orchestrator | 2025-08-29 21:04:11.436288 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-08-29 21:04:11.436292 | orchestrator | 
Friday 29 August 2025 21:01:01 +0000 (0:00:00.320) 0:08:20.178 ********* 2025-08-29 21:04:11.436297 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436302 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436307 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436312 | orchestrator | 2025-08-29 21:04:11.436317 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 21:04:11.436321 | orchestrator | Friday 29 August 2025 21:01:02 +0000 (0:00:00.563) 0:08:20.741 ********* 2025-08-29 21:04:11.436326 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-08-29 21:04:11.436331 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-08-29 21:04:11.436336 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-08-29 21:04:11.436341 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 21:04:11.436348 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-08-29 21:04:11.436353 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-08-29 21:04:11.436358 | orchestrator | 2025-08-29 21:04:11.436362 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 21:04:11.436367 | orchestrator | Friday 29 August 2025 21:01:03 +0000 (0:00:01.068) 0:08:21.810 ********* 2025-08-29 21:04:11.436372 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-08-29 21:04:11.436377 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 21:04:11.436382 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 21:04:11.436386 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 21:04:11.436391 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-08-29 21:04:11.436399 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 21:04:11.436404 | orchestrator | 2025-08-29 21:04:11.436409 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-08-29 21:04:11.436417 | orchestrator | Friday 29 August 2025 21:01:05 +0000 (0:00:02.227) 0:08:24.037 ********* 2025-08-29 21:04:11.436422 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-08-29 21:04:11.436426 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-08-29 21:04:11.436431 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 21:04:11.436436 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-08-29 21:04:11.436441 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-08-29 21:04:11.436445 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 21:04:11.436450 | orchestrator | 2025-08-29 21:04:11.436455 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 21:04:11.436460 | orchestrator | Friday 29 August 2025 21:01:09 +0000 (0:00:03.461) 0:08:27.499 ********* 2025-08-29 21:04:11.436464 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436469 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436474 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.436479 | orchestrator | 2025-08-29 21:04:11.436484 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 21:04:11.436488 | orchestrator | Friday 29 August 2025 21:01:13 +0000 (0:00:03.733) 0:08:31.233 ********* 2025-08-29 21:04:11.436493 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436498 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
21:04:11.436503 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-08-29 21:04:11.436508 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.436513 | orchestrator | 2025-08-29 21:04:11.436517 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-08-29 21:04:11.436522 | orchestrator | Friday 29 August 2025 21:01:25 +0000 (0:00:12.503) 0:08:43.737 ********* 2025-08-29 21:04:11.436527 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436532 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436537 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436541 | orchestrator | 2025-08-29 21:04:11.436546 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.436551 | orchestrator | Friday 29 August 2025 21:01:26 +0000 (0:00:01.173) 0:08:44.910 ********* 2025-08-29 21:04:11.436556 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436561 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436565 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436570 | orchestrator | 2025-08-29 21:04:11.436575 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-08-29 21:04:11.436580 | orchestrator | Friday 29 August 2025 21:01:27 +0000 (0:00:00.376) 0:08:45.287 ********* 2025-08-29 21:04:11.436585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.436589 | orchestrator | 2025-08-29 21:04:11.436594 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-08-29 21:04:11.436599 | orchestrator | Friday 29 August 2025 21:01:27 +0000 (0:00:00.535) 0:08:45.822 ********* 2025-08-29 21:04:11.436604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.436609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.436613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.436618 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436623 | orchestrator | 2025-08-29 21:04:11.436628 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-08-29 21:04:11.436633 | orchestrator | Friday 29 August 2025 21:01:28 +0000 (0:00:00.840) 0:08:46.662 ********* 2025-08-29 21:04:11.436637 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436645 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436650 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436655 | orchestrator | 2025-08-29 21:04:11.436660 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-08-29 21:04:11.436664 | orchestrator | Friday 29 August 2025 21:01:28 +0000 (0:00:00.306) 0:08:46.969 ********* 2025-08-29 21:04:11.436669 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436674 | orchestrator | 2025-08-29 21:04:11.436679 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-08-29 21:04:11.436684 | orchestrator | Friday 29 August 2025 21:01:28 +0000 (0:00:00.207) 0:08:47.176 ********* 2025-08-29 21:04:11.436688 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
21:04:11.436693 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436698 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436703 | orchestrator | 2025-08-29 21:04:11.436708 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-08-29 21:04:11.436712 | orchestrator | Friday 29 August 2025 21:01:29 +0000 (0:00:00.326) 0:08:47.502 ********* 2025-08-29 21:04:11.436717 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436722 | orchestrator | 2025-08-29 21:04:11.436727 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-08-29 21:04:11.436732 | orchestrator | Friday 29 August 2025 21:01:29 +0000 (0:00:00.268) 0:08:47.771 ********* 2025-08-29 21:04:11.436736 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436741 | orchestrator | 2025-08-29 21:04:11.436748 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-08-29 21:04:11.436753 | orchestrator | Friday 29 August 2025 21:01:29 +0000 (0:00:00.235) 0:08:48.006 ********* 2025-08-29 21:04:11.436758 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436763 | orchestrator | 2025-08-29 21:04:11.436768 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-08-29 21:04:11.436772 | orchestrator | Friday 29 August 2025 21:01:29 +0000 (0:00:00.141) 0:08:48.148 ********* 2025-08-29 21:04:11.436777 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436782 | orchestrator | 2025-08-29 21:04:11.436787 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-08-29 21:04:11.436792 | orchestrator | Friday 29 August 2025 21:01:30 +0000 (0:00:00.213) 0:08:48.361 ********* 2025-08-29 21:04:11.436796 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436801 | orchestrator | 2025-08-29 21:04:11.436808 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-08-29 21:04:11.436814 | orchestrator | Friday 29 August 2025 21:01:30 +0000 (0:00:00.774) 0:08:49.136 ********* 2025-08-29 21:04:11.436818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.436823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.436828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.436833 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436838 | orchestrator | 2025-08-29 21:04:11.436843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-08-29 21:04:11.436848 | orchestrator | Friday 29 August 2025 21:01:31 +0000 (0:00:00.410) 0:08:49.547 ********* 2025-08-29 21:04:11.436852 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436857 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.436862 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.436867 | orchestrator | 2025-08-29 21:04:11.436872 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-08-29 21:04:11.436877 | orchestrator | Friday 29 August 2025 21:01:31 +0000 (0:00:00.323) 0:08:49.870 ********* 2025-08-29 21:04:11.436881 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436886 | orchestrator | 2025-08-29 21:04:11.436891 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable 
balancer] **************************** 2025-08-29 21:04:11.436896 | orchestrator | Friday 29 August 2025 21:01:31 +0000 (0:00:00.240) 0:08:50.111 ********* 2025-08-29 21:04:11.436904 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.436909 | orchestrator | 2025-08-29 21:04:11.436914 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-08-29 21:04:11.436918 | orchestrator | 2025-08-29 21:04:11.436923 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.436928 | orchestrator | Friday 29 August 2025 21:01:32 +0000 (0:00:00.651) 0:08:50.762 ********* 2025-08-29 21:04:11.436933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.436938 | orchestrator | 2025-08-29 21:04:11.436943 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.436948 | orchestrator | Friday 29 August 2025 21:01:33 +0000 (0:00:01.254) 0:08:52.017 ********* 2025-08-29 21:04:11.436953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.436957 | orchestrator | 2025-08-29 21:04:11.436962 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.436967 | orchestrator | Friday 29 August 2025 21:01:35 +0000 (0:00:01.293) 0:08:53.310 ********* 2025-08-29 21:04:11.436972 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437007 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437013 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437017 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437022 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437027 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437032 | orchestrator | 2025-08-29 21:04:11.437036 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.437041 | orchestrator | Friday 29 August 2025 21:01:36 +0000 (0:00:00.993) 0:08:54.303 ********* 2025-08-29 21:04:11.437046 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437051 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437056 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437060 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437065 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437070 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437075 | orchestrator | 2025-08-29 21:04:11.437079 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.437084 | orchestrator | Friday 29 August 2025 21:01:37 +0000 (0:00:00.977) 0:08:55.281 ********* 2025-08-29 21:04:11.437089 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437103 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437108 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437113 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437117 | orchestrator | 2025-08-29 21:04:11.437122 | 
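The "Check for a ... container" tasks from check_running_containers.yml only test whether the corresponding daemon container is already present on each host, so the restart handlers later in the run know which services exist there. Conceptually this boils down to one container-engine query per daemon type, sketched below under the assumption of the podman engine and the ceph-<daemon>-<hostname> container naming used by ceph-ansible; the filter strings are an approximation, not a copy of the role's actual command.

  # Prints a container ID only if a matching mon container is running on this host
  podman ps -q --filter "name=ceph-mon-$(hostname)"

  # The same pattern is repeated for osd, mds, rgw, mgr, rbd-mirror, nfs,
  # ceph-crash and ceph-exporter containers
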
orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.437127 | orchestrator | Friday 29 August 2025 21:01:38 +0000 (0:00:01.282) 0:08:56.563 ********* 2025-08-29 21:04:11.437132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437137 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437141 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437146 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437151 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437156 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437161 | orchestrator | 2025-08-29 21:04:11.437165 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.437175 | orchestrator | Friday 29 August 2025 21:01:39 +0000 (0:00:01.053) 0:08:57.616 ********* 2025-08-29 21:04:11.437180 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437185 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437189 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437198 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437203 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437208 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437212 | orchestrator | 2025-08-29 21:04:11.437217 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.437222 | orchestrator | Friday 29 August 2025 21:01:40 +0000 (0:00:01.001) 0:08:58.618 ********* 2025-08-29 21:04:11.437227 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437232 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437236 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437241 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437246 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437251 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437255 | orchestrator | 2025-08-29 21:04:11.437263 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.437268 | orchestrator | Friday 29 August 2025 21:01:40 +0000 (0:00:00.572) 0:08:59.191 ********* 2025-08-29 21:04:11.437273 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437282 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437287 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437292 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437297 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437301 | orchestrator | 2025-08-29 21:04:11.437306 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.437311 | orchestrator | Friday 29 August 2025 21:01:41 +0000 (0:00:00.855) 0:09:00.046 ********* 2025-08-29 21:04:11.437316 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437321 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437325 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437330 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437335 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437340 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437344 | orchestrator | 2025-08-29 21:04:11.437349 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter 
container] ********************** 2025-08-29 21:04:11.437354 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.946) 0:09:00.993 ********* 2025-08-29 21:04:11.437359 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437364 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437368 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437373 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437378 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437382 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437387 | orchestrator | 2025-08-29 21:04:11.437392 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.437397 | orchestrator | Friday 29 August 2025 21:01:43 +0000 (0:00:01.133) 0:09:02.126 ********* 2025-08-29 21:04:11.437402 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437407 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437411 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437416 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437421 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437426 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437430 | orchestrator | 2025-08-29 21:04:11.437435 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.437440 | orchestrator | Friday 29 August 2025 21:01:44 +0000 (0:00:00.508) 0:09:02.634 ********* 2025-08-29 21:04:11.437445 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437450 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437454 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437459 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437469 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437474 | orchestrator | 2025-08-29 21:04:11.437478 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.437486 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:00.715) 0:09:03.350 ********* 2025-08-29 21:04:11.437491 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437496 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437500 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437505 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437510 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437515 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437520 | orchestrator | 2025-08-29 21:04:11.437525 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.437529 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:00.536) 0:09:03.886 ********* 2025-08-29 21:04:11.437534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437539 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437544 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437549 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437553 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437558 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437563 | orchestrator | 2025-08-29 21:04:11.437568 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.437573 | orchestrator | Friday 
29 August 2025 21:01:46 +0000 (0:00:00.745) 0:09:04.632 ********* 2025-08-29 21:04:11.437577 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437582 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437587 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437592 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437596 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437601 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437606 | orchestrator | 2025-08-29 21:04:11.437611 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.437616 | orchestrator | Friday 29 August 2025 21:01:47 +0000 (0:00:00.630) 0:09:05.262 ********* 2025-08-29 21:04:11.437620 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437625 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437630 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437635 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437639 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437644 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437649 | orchestrator | 2025-08-29 21:04:11.437654 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.437661 | orchestrator | Friday 29 August 2025 21:01:47 +0000 (0:00:00.882) 0:09:06.145 ********* 2025-08-29 21:04:11.437666 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:11.437671 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:11.437676 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:11.437680 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437685 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437690 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437695 | orchestrator | 2025-08-29 21:04:11.437699 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.437704 | orchestrator | Friday 29 August 2025 21:01:48 +0000 (0:00:00.608) 0:09:06.753 ********* 2025-08-29 21:04:11.437709 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437714 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437719 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437723 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.437728 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.437733 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.437738 | orchestrator | 2025-08-29 21:04:11.437745 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.437750 | orchestrator | Friday 29 August 2025 21:01:49 +0000 (0:00:00.871) 0:09:07.625 ********* 2025-08-29 21:04:11.437755 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437763 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437768 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437773 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437778 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437783 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437787 | orchestrator | 2025-08-29 21:04:11.437792 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.437797 | orchestrator | Friday 29 August 2025 21:01:50 +0000 (0:00:00.599) 0:09:08.224 ********* 
2025-08-29 21:04:11.437802 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437806 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.437811 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.437816 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.437820 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.437825 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.437830 | orchestrator | 2025-08-29 21:04:11.437835 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-08-29 21:04:11.437840 | orchestrator | Friday 29 August 2025 21:01:51 +0000 (0:00:01.044) 0:09:09.268 ********* 2025-08-29 21:04:11.437844 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.437849 | orchestrator | 2025-08-29 21:04:11.437854 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-08-29 21:04:11.437859 | orchestrator | Friday 29 August 2025 21:01:55 +0000 (0:00:04.082) 0:09:13.350 ********* 2025-08-29 21:04:11.437864 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437868 | orchestrator | 2025-08-29 21:04:11.437873 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-08-29 21:04:11.437878 | orchestrator | Friday 29 August 2025 21:01:57 +0000 (0:00:02.372) 0:09:15.723 ********* 2025-08-29 21:04:11.437883 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.437888 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.437892 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.437897 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.437902 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.437907 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.437911 | orchestrator | 2025-08-29 21:04:11.437916 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-08-29 21:04:11.437921 | orchestrator | Friday 29 August 2025 21:01:59 +0000 (0:00:01.759) 0:09:17.482 ********* 2025-08-29 21:04:11.437926 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.437930 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.437935 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.437940 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.437945 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.437949 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.437954 | orchestrator | 2025-08-29 21:04:11.437959 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-08-29 21:04:11.437964 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:01.016) 0:09:18.499 ********* 2025-08-29 21:04:11.437969 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.437974 | orchestrator | 2025-08-29 21:04:11.437992 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-08-29 21:04:11.437996 | orchestrator | Friday 29 August 2025 21:02:01 +0000 (0:00:00.998) 0:09:19.497 ********* 2025-08-29 21:04:11.438001 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.438006 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.438011 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.438031 | orchestrator | changed: [testbed-node-3] 
2025-08-29 21:04:11.438036 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.438041 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.438046 | orchestrator | 2025-08-29 21:04:11.438050 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-08-29 21:04:11.438055 | orchestrator | Friday 29 August 2025 21:02:02 +0000 (0:00:01.598) 0:09:21.095 ********* 2025-08-29 21:04:11.438064 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.438069 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.438074 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.438079 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.438083 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.438088 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.438093 | orchestrator | 2025-08-29 21:04:11.438098 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-08-29 21:04:11.438103 | orchestrator | Friday 29 August 2025 21:02:06 +0000 (0:00:03.288) 0:09:24.384 ********* 2025-08-29 21:04:11.438108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.438112 | orchestrator | 2025-08-29 21:04:11.438117 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-08-29 21:04:11.438125 | orchestrator | Friday 29 August 2025 21:02:07 +0000 (0:00:01.088) 0:09:25.473 ********* 2025-08-29 21:04:11.438130 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.438135 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.438139 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.438144 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438149 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438154 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438158 | orchestrator | 2025-08-29 21:04:11.438163 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-08-29 21:04:11.438168 | orchestrator | Friday 29 August 2025 21:02:08 +0000 (0:00:00.831) 0:09:26.305 ********* 2025-08-29 21:04:11.438173 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:11.438178 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.438182 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:11.438187 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.438192 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.438197 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:11.438201 | orchestrator | 2025-08-29 21:04:11.438209 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-08-29 21:04:11.438214 | orchestrator | Friday 29 August 2025 21:02:10 +0000 (0:00:02.416) 0:09:28.721 ********* 2025-08-29 21:04:11.438219 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:11.438223 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:11.438228 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:11.438233 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438238 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438242 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438247 | orchestrator | 2025-08-29 21:04:11.438252 | orchestrator | PLAY [Apply role ceph-mds] 
***************************************************** 2025-08-29 21:04:11.438257 | orchestrator | 2025-08-29 21:04:11.438261 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.438266 | orchestrator | Friday 29 August 2025 21:02:11 +0000 (0:00:00.665) 0:09:29.386 ********* 2025-08-29 21:04:11.438271 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.438276 | orchestrator | 2025-08-29 21:04:11.438281 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.438286 | orchestrator | Friday 29 August 2025 21:02:11 +0000 (0:00:00.730) 0:09:30.117 ********* 2025-08-29 21:04:11.438291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.438295 | orchestrator | 2025-08-29 21:04:11.438300 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.438305 | orchestrator | Friday 29 August 2025 21:02:12 +0000 (0:00:00.513) 0:09:30.631 ********* 2025-08-29 21:04:11.438310 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438318 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438323 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438328 | orchestrator | 2025-08-29 21:04:11.438332 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.438337 | orchestrator | Friday 29 August 2025 21:02:12 +0000 (0:00:00.273) 0:09:30.904 ********* 2025-08-29 21:04:11.438342 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438347 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438352 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438356 | orchestrator | 2025-08-29 21:04:11.438361 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.438366 | orchestrator | Friday 29 August 2025 21:02:13 +0000 (0:00:00.853) 0:09:31.758 ********* 2025-08-29 21:04:11.438371 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438375 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438380 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438385 | orchestrator | 2025-08-29 21:04:11.438390 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.438394 | orchestrator | Friday 29 August 2025 21:02:14 +0000 (0:00:00.805) 0:09:32.563 ********* 2025-08-29 21:04:11.438399 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438404 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438409 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438413 | orchestrator | 2025-08-29 21:04:11.438418 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.438423 | orchestrator | Friday 29 August 2025 21:02:15 +0000 (0:00:00.759) 0:09:33.322 ********* 2025-08-29 21:04:11.438428 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438433 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438438 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438442 | orchestrator | 2025-08-29 21:04:11.438447 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-08-29 21:04:11.438452 | orchestrator | Friday 29 August 2025 21:02:15 +0000 (0:00:00.298) 0:09:33.621 ********* 2025-08-29 21:04:11.438457 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438462 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438466 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438471 | orchestrator | 2025-08-29 21:04:11.438476 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.438481 | orchestrator | Friday 29 August 2025 21:02:15 +0000 (0:00:00.576) 0:09:34.197 ********* 2025-08-29 21:04:11.438486 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438490 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438495 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438500 | orchestrator | 2025-08-29 21:04:11.438505 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.438510 | orchestrator | Friday 29 August 2025 21:02:16 +0000 (0:00:00.305) 0:09:34.503 ********* 2025-08-29 21:04:11.438514 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438519 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438524 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438529 | orchestrator | 2025-08-29 21:04:11.438533 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.438538 | orchestrator | Friday 29 August 2025 21:02:17 +0000 (0:00:00.743) 0:09:35.246 ********* 2025-08-29 21:04:11.438543 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438548 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438553 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438557 | orchestrator | 2025-08-29 21:04:11.438565 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.438570 | orchestrator | Friday 29 August 2025 21:02:17 +0000 (0:00:00.680) 0:09:35.927 ********* 2025-08-29 21:04:11.438575 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438579 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438584 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438592 | orchestrator | 2025-08-29 21:04:11.438597 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.438602 | orchestrator | Friday 29 August 2025 21:02:18 +0000 (0:00:00.549) 0:09:36.476 ********* 2025-08-29 21:04:11.438607 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438611 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438616 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438621 | orchestrator | 2025-08-29 21:04:11.438626 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.438630 | orchestrator | Friday 29 August 2025 21:02:18 +0000 (0:00:00.383) 0:09:36.860 ********* 2025-08-29 21:04:11.438637 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438642 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438647 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438652 | orchestrator | 2025-08-29 21:04:11.438657 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.438661 | orchestrator | Friday 29 August 2025 21:02:19 +0000 
(0:00:00.399) 0:09:37.259 ********* 2025-08-29 21:04:11.438666 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438671 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438676 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438681 | orchestrator | 2025-08-29 21:04:11.438685 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.438690 | orchestrator | Friday 29 August 2025 21:02:19 +0000 (0:00:00.372) 0:09:37.632 ********* 2025-08-29 21:04:11.438695 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438700 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438705 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438709 | orchestrator | 2025-08-29 21:04:11.438714 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.438719 | orchestrator | Friday 29 August 2025 21:02:20 +0000 (0:00:00.618) 0:09:38.250 ********* 2025-08-29 21:04:11.438724 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438729 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438733 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438738 | orchestrator | 2025-08-29 21:04:11.438743 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.438748 | orchestrator | Friday 29 August 2025 21:02:20 +0000 (0:00:00.322) 0:09:38.573 ********* 2025-08-29 21:04:11.438753 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438757 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438762 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438767 | orchestrator | 2025-08-29 21:04:11.438772 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.438776 | orchestrator | Friday 29 August 2025 21:02:20 +0000 (0:00:00.415) 0:09:38.989 ********* 2025-08-29 21:04:11.438781 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438791 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438795 | orchestrator | 2025-08-29 21:04:11.438800 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.438805 | orchestrator | Friday 29 August 2025 21:02:21 +0000 (0:00:00.314) 0:09:39.304 ********* 2025-08-29 21:04:11.438810 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438815 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438819 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438824 | orchestrator | 2025-08-29 21:04:11.438829 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.438834 | orchestrator | Friday 29 August 2025 21:02:21 +0000 (0:00:00.669) 0:09:39.973 ********* 2025-08-29 21:04:11.438839 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.438843 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.438848 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.438853 | orchestrator | 2025-08-29 21:04:11.438858 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-08-29 21:04:11.438866 | orchestrator | Friday 29 August 2025 21:02:22 +0000 (0:00:00.558) 0:09:40.532 ********* 2025-08-29 21:04:11.438871 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.438876 
| orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.438881 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-08-29 21:04:11.438885 | orchestrator | 2025-08-29 21:04:11.438890 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-08-29 21:04:11.438895 | orchestrator | Friday 29 August 2025 21:02:22 +0000 (0:00:00.576) 0:09:41.109 ********* 2025-08-29 21:04:11.438900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.438904 | orchestrator | 2025-08-29 21:04:11.438909 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-08-29 21:04:11.438914 | orchestrator | Friday 29 August 2025 21:02:25 +0000 (0:00:02.118) 0:09:43.227 ********* 2025-08-29 21:04:11.438920 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 21:04:11.438925 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.438930 | orchestrator | 2025-08-29 21:04:11.438935 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 21:04:11.438940 | orchestrator | Friday 29 August 2025 21:02:25 +0000 (0:00:00.219) 0:09:43.446 ********* 2025-08-29 21:04:11.438946 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:04:11.438959 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:04:11.438964 | orchestrator | 2025-08-29 21:04:11.438969 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-08-29 21:04:11.438974 | orchestrator | Friday 29 August 2025 21:02:33 +0000 (0:00:08.375) 0:09:51.822 ********* 2025-08-29 21:04:11.439006 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:04:11.439011 | orchestrator | 2025-08-29 21:04:11.439015 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 21:04:11.439020 | orchestrator | Friday 29 August 2025 21:02:37 +0000 (0:00:03.737) 0:09:55.560 ********* 2025-08-29 21:04:11.439028 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439033 | orchestrator | 2025-08-29 21:04:11.439038 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 21:04:11.439043 | orchestrator | Friday 29 August 2025 21:02:37 +0000 (0:00:00.500) 0:09:56.060 ********* 2025-08-29 21:04:11.439048 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 21:04:11.439052 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 21:04:11.439057 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 21:04:11.439062 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 21:04:11.439067 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 21:04:11.439071 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 21:04:11.439076 | orchestrator | 2025-08-29 21:04:11.439081 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 21:04:11.439086 | orchestrator | Friday 29 August 2025 21:02:39 +0000 (0:00:01.311) 0:09:57.371 ********* 2025-08-29 21:04:11.439091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.439099 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.439104 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.439109 | orchestrator | 2025-08-29 21:04:11.439114 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 21:04:11.439118 | orchestrator | Friday 29 August 2025 21:02:41 +0000 (0:00:02.376) 0:09:59.748 ********* 2025-08-29 21:04:11.439123 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 21:04:11.439127 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.439132 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439137 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 21:04:11.439141 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 21:04:11.439146 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 21:04:11.439150 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439155 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 21:04:11.439159 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439164 | orchestrator | 2025-08-29 21:04:11.439168 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 21:04:11.439173 | orchestrator | Friday 29 August 2025 21:02:42 +0000 (0:00:01.265) 0:10:01.013 ********* 2025-08-29 21:04:11.439177 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439182 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439186 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439191 | orchestrator | 2025-08-29 21:04:11.439195 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 21:04:11.439200 | orchestrator | Friday 29 August 2025 21:02:45 +0000 (0:00:02.988) 0:10:04.002 ********* 2025-08-29 21:04:11.439204 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439209 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439213 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439218 | orchestrator | 2025-08-29 21:04:11.439223 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-08-29 21:04:11.439227 | orchestrator | Friday 29 August 2025 21:02:46 +0000 (0:00:00.537) 0:10:04.540 ********* 2025-08-29 21:04:11.439232 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439236 | orchestrator | 2025-08-29 21:04:11.439241 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-08-29 
21:04:11.439245 | orchestrator | Friday 29 August 2025 21:02:46 +0000 (0:00:00.462) 0:10:05.003 ********* 2025-08-29 21:04:11.439250 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439254 | orchestrator | 2025-08-29 21:04:11.439259 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-08-29 21:04:11.439263 | orchestrator | Friday 29 August 2025 21:02:47 +0000 (0:00:00.574) 0:10:05.577 ********* 2025-08-29 21:04:11.439268 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439272 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439277 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439282 | orchestrator | 2025-08-29 21:04:11.439286 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-08-29 21:04:11.439291 | orchestrator | Friday 29 August 2025 21:02:48 +0000 (0:00:01.206) 0:10:06.784 ********* 2025-08-29 21:04:11.439295 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439300 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439304 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439309 | orchestrator | 2025-08-29 21:04:11.439313 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-08-29 21:04:11.439320 | orchestrator | Friday 29 August 2025 21:02:49 +0000 (0:00:01.166) 0:10:07.950 ********* 2025-08-29 21:04:11.439325 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439333 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439337 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439342 | orchestrator | 2025-08-29 21:04:11.439346 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-08-29 21:04:11.439351 | orchestrator | Friday 29 August 2025 21:02:51 +0000 (0:00:01.735) 0:10:09.685 ********* 2025-08-29 21:04:11.439355 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439360 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439364 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439369 | orchestrator | 2025-08-29 21:04:11.439373 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-08-29 21:04:11.439378 | orchestrator | Friday 29 August 2025 21:02:53 +0000 (0:00:02.299) 0:10:11.985 ********* 2025-08-29 21:04:11.439382 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439389 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439394 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439398 | orchestrator | 2025-08-29 21:04:11.439403 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.439407 | orchestrator | Friday 29 August 2025 21:02:54 +0000 (0:00:01.123) 0:10:13.108 ********* 2025-08-29 21:04:11.439412 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439416 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439421 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439425 | orchestrator | 2025-08-29 21:04:11.439430 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 21:04:11.439435 | orchestrator | Friday 29 August 2025 21:02:55 +0000 (0:00:00.842) 0:10:13.950 ********* 2025-08-29 21:04:11.439439 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439444 | orchestrator | 2025-08-29 21:04:11.439448 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-08-29 21:04:11.439453 | orchestrator | Friday 29 August 2025 21:02:56 +0000 (0:00:00.445) 0:10:14.396 ********* 2025-08-29 21:04:11.439457 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439462 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439466 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439471 | orchestrator | 2025-08-29 21:04:11.439475 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 21:04:11.439480 | orchestrator | Friday 29 August 2025 21:02:56 +0000 (0:00:00.282) 0:10:14.679 ********* 2025-08-29 21:04:11.439485 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.439489 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.439494 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.439498 | orchestrator | 2025-08-29 21:04:11.439503 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 21:04:11.439507 | orchestrator | Friday 29 August 2025 21:02:57 +0000 (0:00:01.217) 0:10:15.896 ********* 2025-08-29 21:04:11.439512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.439516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.439521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.439526 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439530 | orchestrator | 2025-08-29 21:04:11.439535 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 21:04:11.439539 | orchestrator | Friday 29 August 2025 21:02:58 +0000 (0:00:00.986) 0:10:16.883 ********* 2025-08-29 21:04:11.439544 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439548 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439553 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439557 | orchestrator | 2025-08-29 21:04:11.439562 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 21:04:11.439566 | orchestrator | 2025-08-29 21:04:11.439571 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 21:04:11.439576 | orchestrator | Friday 29 August 2025 21:02:59 +0000 (0:00:00.541) 0:10:17.425 ********* 2025-08-29 21:04:11.439585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439589 | orchestrator | 2025-08-29 21:04:11.439594 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 21:04:11.439599 | orchestrator | Friday 29 August 2025 21:02:59 +0000 (0:00:00.686) 0:10:18.111 ********* 2025-08-29 21:04:11.439603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.439608 | orchestrator | 2025-08-29 21:04:11.439612 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 21:04:11.439617 | orchestrator | Friday 29 August 2025 21:03:00 +0000 (0:00:00.518) 
0:10:18.630 ********* 2025-08-29 21:04:11.439621 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439626 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439630 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439635 | orchestrator | 2025-08-29 21:04:11.439640 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 21:04:11.439644 | orchestrator | Friday 29 August 2025 21:03:00 +0000 (0:00:00.319) 0:10:18.950 ********* 2025-08-29 21:04:11.439649 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439653 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439658 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439662 | orchestrator | 2025-08-29 21:04:11.439667 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 21:04:11.439671 | orchestrator | Friday 29 August 2025 21:03:01 +0000 (0:00:01.030) 0:10:19.980 ********* 2025-08-29 21:04:11.439676 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439680 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439685 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439689 | orchestrator | 2025-08-29 21:04:11.439694 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 21:04:11.439698 | orchestrator | Friday 29 August 2025 21:03:02 +0000 (0:00:00.796) 0:10:20.776 ********* 2025-08-29 21:04:11.439705 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439710 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439714 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439719 | orchestrator | 2025-08-29 21:04:11.439723 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 21:04:11.439728 | orchestrator | Friday 29 August 2025 21:03:03 +0000 (0:00:00.744) 0:10:21.521 ********* 2025-08-29 21:04:11.439733 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439742 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439746 | orchestrator | 2025-08-29 21:04:11.439751 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 21:04:11.439755 | orchestrator | Friday 29 August 2025 21:03:03 +0000 (0:00:00.287) 0:10:21.808 ********* 2025-08-29 21:04:11.439760 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439764 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439769 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439774 | orchestrator | 2025-08-29 21:04:11.439780 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 21:04:11.439785 | orchestrator | Friday 29 August 2025 21:03:04 +0000 (0:00:00.537) 0:10:22.346 ********* 2025-08-29 21:04:11.439789 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439794 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439798 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439803 | orchestrator | 2025-08-29 21:04:11.439807 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 21:04:11.439812 | orchestrator | Friday 29 August 2025 21:03:04 +0000 (0:00:00.317) 0:10:22.663 ********* 2025-08-29 21:04:11.439817 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439821 | 
orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439829 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439833 | orchestrator | 2025-08-29 21:04:11.439838 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 21:04:11.439842 | orchestrator | Friday 29 August 2025 21:03:05 +0000 (0:00:00.744) 0:10:23.408 ********* 2025-08-29 21:04:11.439847 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439851 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439856 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439860 | orchestrator | 2025-08-29 21:04:11.439865 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 21:04:11.439869 | orchestrator | Friday 29 August 2025 21:03:05 +0000 (0:00:00.733) 0:10:24.141 ********* 2025-08-29 21:04:11.439874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439879 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439883 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439888 | orchestrator | 2025-08-29 21:04:11.439892 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 21:04:11.439897 | orchestrator | Friday 29 August 2025 21:03:06 +0000 (0:00:00.529) 0:10:24.670 ********* 2025-08-29 21:04:11.439901 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.439906 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.439910 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.439915 | orchestrator | 2025-08-29 21:04:11.439920 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 21:04:11.439924 | orchestrator | Friday 29 August 2025 21:03:06 +0000 (0:00:00.297) 0:10:24.967 ********* 2025-08-29 21:04:11.439929 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439933 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439938 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439942 | orchestrator | 2025-08-29 21:04:11.439947 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 21:04:11.439951 | orchestrator | Friday 29 August 2025 21:03:07 +0000 (0:00:00.335) 0:10:25.303 ********* 2025-08-29 21:04:11.439956 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439960 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.439965 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.439969 | orchestrator | 2025-08-29 21:04:11.439974 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 21:04:11.439990 | orchestrator | Friday 29 August 2025 21:03:07 +0000 (0:00:00.320) 0:10:25.624 ********* 2025-08-29 21:04:11.439995 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.439999 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.440004 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.440008 | orchestrator | 2025-08-29 21:04:11.440013 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 21:04:11.440017 | orchestrator | Friday 29 August 2025 21:03:07 +0000 (0:00:00.547) 0:10:26.171 ********* 2025-08-29 21:04:11.440022 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440027 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440031 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440036 | 
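The container checks above only establish whether a given Ceph daemon container is already running on each node; those results are then condensed into the handler_*_status facts that decide whether a restart handler has any work to do. A minimal sketch of that pattern, assuming Docker and an illustrative container name (this is not the actual ceph-ansible role code):

  - name: Check for an osd container               # probe only, never changes anything
    command: docker ps -q --filter "name=ceph-osd-{{ ansible_facts['hostname'] }}"
    register: osd_container_check
    changed_when: false
    failed_when: false

  - name: Set_fact handler_osd_status              # true only if the probe found a container
    set_fact:
      handler_osd_status: "{{ (osd_container_check.stdout | default('')) | length > 0 }}"
    when: inventory_hostname in groups.get('osds', [])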
orchestrator | 2025-08-29 21:04:11.440040 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 21:04:11.440045 | orchestrator | Friday 29 August 2025 21:03:08 +0000 (0:00:00.321) 0:10:26.493 ********* 2025-08-29 21:04:11.440049 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440054 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440058 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440063 | orchestrator | 2025-08-29 21:04:11.440067 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 21:04:11.440072 | orchestrator | Friday 29 August 2025 21:03:08 +0000 (0:00:00.315) 0:10:26.808 ********* 2025-08-29 21:04:11.440076 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440081 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440086 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440090 | orchestrator | 2025-08-29 21:04:11.440097 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 21:04:11.440102 | orchestrator | Friday 29 August 2025 21:03:08 +0000 (0:00:00.338) 0:10:27.147 ********* 2025-08-29 21:04:11.440107 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.440111 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.440115 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.440120 | orchestrator | 2025-08-29 21:04:11.440124 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 21:04:11.440129 | orchestrator | Friday 29 August 2025 21:03:09 +0000 (0:00:00.644) 0:10:27.791 ********* 2025-08-29 21:04:11.440133 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.440138 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.440145 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.440150 | orchestrator | 2025-08-29 21:04:11.440154 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 21:04:11.440159 | orchestrator | Friday 29 August 2025 21:03:10 +0000 (0:00:00.543) 0:10:28.335 ********* 2025-08-29 21:04:11.440163 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.440168 | orchestrator | 2025-08-29 21:04:11.440172 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 21:04:11.440177 | orchestrator | Friday 29 August 2025 21:03:10 +0000 (0:00:00.746) 0:10:29.082 ********* 2025-08-29 21:04:11.440181 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440186 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.440191 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.440195 | orchestrator | 2025-08-29 21:04:11.440202 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 21:04:11.440207 | orchestrator | Friday 29 August 2025 21:03:13 +0000 (0:00:02.401) 0:10:31.483 ********* 2025-08-29 21:04:11.440211 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 21:04:11.440216 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 21:04:11.440220 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.440225 | orchestrator | changed: [testbed-node-4] => 
(item=None) 2025-08-29 21:04:11.440229 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 21:04:11.440234 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.440238 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 21:04:11.440243 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 21:04:11.440247 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.440252 | orchestrator | 2025-08-29 21:04:11.440257 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 21:04:11.440261 | orchestrator | Friday 29 August 2025 21:03:14 +0000 (0:00:01.237) 0:10:32.721 ********* 2025-08-29 21:04:11.440266 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440270 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440279 | orchestrator | 2025-08-29 21:04:11.440284 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 21:04:11.440288 | orchestrator | Friday 29 August 2025 21:03:14 +0000 (0:00:00.318) 0:10:33.039 ********* 2025-08-29 21:04:11.440293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.440297 | orchestrator | 2025-08-29 21:04:11.440302 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 21:04:11.440306 | orchestrator | Friday 29 August 2025 21:03:15 +0000 (0:00:00.841) 0:10:33.881 ********* 2025-08-29 21:04:11.440311 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440316 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440323 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440328 | orchestrator | 2025-08-29 21:04:11.440333 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 21:04:11.440337 | orchestrator | Friday 29 August 2025 21:03:16 +0000 (0:00:00.885) 0:10:34.766 ********* 2025-08-29 21:04:11.440342 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440346 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 21:04:11.440351 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440355 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 21:04:11.440360 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440365 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 21:04:11.440369 | orchestrator | 2025-08-29 21:04:11.440374 | orchestrator | TASK [ceph-rgw : Get keys from monitors] 
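The "Create rgw keyrings" task above is delegated to the first monitor (groups[mon_group_name][0], testbed-node-0 here), since only a host with the admin keyring can mint new cephx keys; the templated delegate_to expression appearing unrendered in the per-host summary lines is an Ansible display quirk, not an error. A hedged CLI-level sketch of the same idea (capabilities and paths are illustrative, not the role's exact implementation):

  - name: Create rgw keyring on the first monitor
    command: >
      ceph auth get-or-create client.rgw.{{ ansible_facts['hostname'] }}.rgw0
      mon 'allow rw' osd 'allow rwx'
      -o /etc/ceph/ceph.client.rgw.{{ ansible_facts['hostname'] }}.rgw0.keyring
    delegate_to: "{{ groups[mon_group_name][0] }}"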
*************************************** 2025-08-29 21:04:11.440378 | orchestrator | Friday 29 August 2025 21:03:21 +0000 (0:00:05.027) 0:10:39.794 ********* 2025-08-29 21:04:11.440383 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440387 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.440392 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440396 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.440401 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:04:11.440405 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:04:11.440410 | orchestrator | 2025-08-29 21:04:11.440415 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 21:04:11.440422 | orchestrator | Friday 29 August 2025 21:03:23 +0000 (0:00:02.374) 0:10:42.168 ********* 2025-08-29 21:04:11.440426 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 21:04:11.440431 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.440436 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 21:04:11.440440 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.440445 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 21:04:11.440449 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.440454 | orchestrator | 2025-08-29 21:04:11.440458 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 21:04:11.440463 | orchestrator | Friday 29 August 2025 21:03:25 +0000 (0:00:01.674) 0:10:43.843 ********* 2025-08-29 21:04:11.440468 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 21:04:11.440472 | orchestrator | 2025-08-29 21:04:11.440477 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 21:04:11.440484 | orchestrator | Friday 29 August 2025 21:03:25 +0000 (0:00:00.233) 0:10:44.077 ********* 2025-08-29 21:04:11.440488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440515 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440519 | orchestrator | 2025-08-29 21:04:11.440524 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 21:04:11.440529 | orchestrator | Friday 29 August 2025 21:03:26 +0000 (0:00:00.593) 0:10:44.671 ********* 2025-08-29 21:04:11.440533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 21:04:11.440556 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440561 | orchestrator | 2025-08-29 21:04:11.440565 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 21:04:11.440570 | orchestrator | Friday 29 August 2025 21:03:27 +0000 (0:00:00.574) 0:10:45.246 ********* 2025-08-29 21:04:11.440575 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 21:04:11.440579 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 21:04:11.440584 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 21:04:11.440588 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 21:04:11.440593 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 21:04:11.440597 | orchestrator | 2025-08-29 21:04:11.440602 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 21:04:11.440607 | orchestrator | Friday 29 August 2025 21:03:58 +0000 (0:00:31.276) 0:11:16.522 ********* 2025-08-29 21:04:11.440611 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440616 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440620 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440625 | orchestrator | 2025-08-29 21:04:11.440630 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 21:04:11.440634 | orchestrator | Friday 29 August 2025 21:03:58 +0000 (0:00:00.307) 0:11:16.830 ********* 2025-08-29 21:04:11.440639 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440643 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440648 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440652 | orchestrator | 2025-08-29 21:04:11.440657 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 21:04:11.440661 | orchestrator | Friday 29 August 2025 21:03:59 +0000 (0:00:00.511) 0:11:17.342 ********* 2025-08-29 21:04:11.440668 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, 
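At roughly 31 seconds, "Create rgw pools" is the slowest ceph-rgw task in this run: each default.rgw.* pool is created as a replicated pool with pg_num 8 and size 3, delegated to a monitor. A rough ceph-CLI equivalent, offered as a sketch of the idea rather than the role's implementation (rgw_create_pools is assumed to carry the same key/value structure shown in the loop items above):

  - name: Create rgw pools
    command: >
      ceph osd pool create {{ item.key }}
      {{ item.value.pg_num }} {{ item.value.pg_num }} replicated
    loop: "{{ rgw_create_pools | dict2items }}"
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true

  - name: Set pool size
    command: ceph osd pool set {{ item.key }} size {{ item.value.size }}
    loop: "{{ rgw_create_pools | dict2items }}"
    delegate_to: "{{ groups[mon_group_name][0] }}"
    run_once: true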
testbed-node-4, testbed-node-5 2025-08-29 21:04:11.440676 | orchestrator | 2025-08-29 21:04:11.440680 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 21:04:11.440685 | orchestrator | Friday 29 August 2025 21:03:59 +0000 (0:00:00.513) 0:11:17.856 ********* 2025-08-29 21:04:11.440689 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.440694 | orchestrator | 2025-08-29 21:04:11.440698 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 21:04:11.440703 | orchestrator | Friday 29 August 2025 21:04:00 +0000 (0:00:00.479) 0:11:18.335 ********* 2025-08-29 21:04:11.440708 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.440712 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.440717 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.440721 | orchestrator | 2025-08-29 21:04:11.440728 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 21:04:11.440733 | orchestrator | Friday 29 August 2025 21:04:01 +0000 (0:00:01.557) 0:11:19.892 ********* 2025-08-29 21:04:11.440737 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.440742 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.440746 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.440751 | orchestrator | 2025-08-29 21:04:11.440756 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 21:04:11.440760 | orchestrator | Friday 29 August 2025 21:04:02 +0000 (0:00:01.213) 0:11:21.106 ********* 2025-08-29 21:04:11.440765 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:04:11.440769 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:04:11.440774 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:04:11.440778 | orchestrator | 2025-08-29 21:04:11.440783 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 21:04:11.440787 | orchestrator | Friday 29 August 2025 21:04:04 +0000 (0:00:01.850) 0:11:22.956 ********* 2025-08-29 21:04:11.440792 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440797 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440801 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 21:04:11.440806 | orchestrator | 2025-08-29 21:04:11.440810 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 21:04:11.440815 | orchestrator | Friday 29 August 2025 21:04:07 +0000 (0:00:02.781) 0:11:25.737 ********* 2025-08-29 21:04:11.440819 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440824 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440829 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440833 | orchestrator | 2025-08-29 21:04:11.440838 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 21:04:11.440842 | orchestrator | Friday 29 August 2025 21:04:07 +0000 (0:00:00.356) 0:11:26.094 ********* 2025-08-29 
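Rendering one systemd unit per RGW instance plus a ceph-radosgw.target, enabling the target, and then starting each instance hands the container lifecycle to systemd rather than Docker's restart policy. Sketched in plain Ansible (template and unit names are assumptions, not the files the role actually renders):

  - name: Generate systemd unit file
    template:
      src: ceph-radosgw.service.j2               # hypothetical template name
      dest: /etc/systemd/system/ceph-radosgw@.service
      mode: "0644"
    notify: Restart ceph rgw daemon(s)

  - name: Systemd start rgw container
    systemd:
      name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
      state: started
      enabled: true
      daemon_reload: true
    loop: "{{ rgw_instances }}"                  # e.g. [{'instance_name': 'rgw0', ...}]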
21:04:11.440847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:04:11.440851 | orchestrator | 2025-08-29 21:04:11.440856 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 21:04:11.440860 | orchestrator | Friday 29 August 2025 21:04:08 +0000 (0:00:00.726) 0:11:26.821 ********* 2025-08-29 21:04:11.440865 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.440870 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.440874 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.440879 | orchestrator | 2025-08-29 21:04:11.440883 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 21:04:11.440888 | orchestrator | Friday 29 August 2025 21:04:08 +0000 (0:00:00.356) 0:11:27.177 ********* 2025-08-29 21:04:11.440892 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440901 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:04:11.440906 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:04:11.440910 | orchestrator | 2025-08-29 21:04:11.440915 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 21:04:11.440919 | orchestrator | Friday 29 August 2025 21:04:09 +0000 (0:00:00.382) 0:11:27.560 ********* 2025-08-29 21:04:11.440924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:04:11.440928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:04:11.440933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:04:11.440937 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:04:11.440942 | orchestrator | 2025-08-29 21:04:11.440946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 21:04:11.440951 | orchestrator | Friday 29 August 2025 21:04:10 +0000 (0:00:00.842) 0:11:28.402 ********* 2025-08-29 21:04:11.440955 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:04:11.440960 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:04:11.440964 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:04:11.440969 | orchestrator | 2025-08-29 21:04:11.440974 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:04:11.440989 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-08-29 21:04:11.440994 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-08-29 21:04:11.440998 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-08-29 21:04:11.441005 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-08-29 21:04:11.441010 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-08-29 21:04:11.441014 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-08-29 21:04:11.441019 | orchestrator | 2025-08-29 21:04:11.441024 | orchestrator | 2025-08-29 21:04:11.441028 | orchestrator | 2025-08-29 21:04:11.441033 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:04:11.441037 | orchestrator | Friday 
29 August 2025 21:04:10 +0000 (0:00:00.231) 0:11:28.634 ********* 2025-08-29 21:04:11.441044 | orchestrator | =============================================================================== 2025-08-29 21:04:11.441049 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 94.05s 2025-08-29 21:04:11.441054 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.41s 2025-08-29 21:04:11.441058 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.28s 2025-08-29 21:04:11.441063 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.18s 2025-08-29 21:04:11.441067 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.21s 2025-08-29 21:04:11.441072 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.01s 2025-08-29 21:04:11.441076 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.50s 2025-08-29 21:04:11.441081 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.22s 2025-08-29 21:04:11.441085 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.75s 2025-08-29 21:04:11.441090 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.38s 2025-08-29 21:04:11.441094 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.19s 2025-08-29 21:04:11.441099 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.32s 2025-08-29 21:04:11.441107 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.03s 2025-08-29 21:04:11.441111 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.00s 2025-08-29 21:04:11.441116 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.08s 2025-08-29 21:04:11.441121 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.90s 2025-08-29 21:04:11.441125 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s 2025-08-29 21:04:11.441130 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.73s 2025-08-29 21:04:11.441134 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.46s 2025-08-29 21:04:11.441139 | orchestrator | ceph-mon : Enable ceph-mon.target --------------------------------------- 3.42s 2025-08-29 21:04:11.441143 | orchestrator | 2025-08-29 21:04:11 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:11.441148 | orchestrator | 2025-08-29 21:04:11 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:11.441153 | orchestrator | 2025-08-29 21:04:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:14.473947 | orchestrator | 2025-08-29 21:04:14 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:14.480753 | orchestrator | 2025-08-29 21:04:14 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:14.483705 | orchestrator | 2025-08-29 21:04:14 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:14.483919 | orchestrator | 2025-08-29 21:04:14 | INFO  | 
Wait 1 second(s) until the next check 2025-08-29 21:04:17.522314 | orchestrator | 2025-08-29 21:04:17 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:17.525304 | orchestrator | 2025-08-29 21:04:17 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:17.526917 | orchestrator | 2025-08-29 21:04:17 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:17.526951 | orchestrator | 2025-08-29 21:04:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:20.564681 | orchestrator | 2025-08-29 21:04:20 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:20.565358 | orchestrator | 2025-08-29 21:04:20 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:20.566534 | orchestrator | 2025-08-29 21:04:20 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:20.566560 | orchestrator | 2025-08-29 21:04:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:23.609509 | orchestrator | 2025-08-29 21:04:23 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:23.610947 | orchestrator | 2025-08-29 21:04:23 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:23.612463 | orchestrator | 2025-08-29 21:04:23 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:23.612492 | orchestrator | 2025-08-29 21:04:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:26.651905 | orchestrator | 2025-08-29 21:04:26 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:26.659069 | orchestrator | 2025-08-29 21:04:26 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:26.660265 | orchestrator | 2025-08-29 21:04:26 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:26.660350 | orchestrator | 2025-08-29 21:04:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:29.691725 | orchestrator | 2025-08-29 21:04:29 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:29.693526 | orchestrator | 2025-08-29 21:04:29 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:29.696394 | orchestrator | 2025-08-29 21:04:29 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:29.696447 | orchestrator | 2025-08-29 21:04:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:32.733235 | orchestrator | 2025-08-29 21:04:32 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:32.733646 | orchestrator | 2025-08-29 21:04:32 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:32.734806 | orchestrator | 2025-08-29 21:04:32 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:32.736089 | orchestrator | 2025-08-29 21:04:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:35.778525 | orchestrator | 2025-08-29 21:04:35 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:35.780988 | orchestrator | 2025-08-29 21:04:35 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:35.782674 | orchestrator | 2025-08-29 21:04:35 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in 
state STARTED 2025-08-29 21:04:35.782702 | orchestrator | 2025-08-29 21:04:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:38.816191 | orchestrator | 2025-08-29 21:04:38 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:38.816756 | orchestrator | 2025-08-29 21:04:38 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:38.817404 | orchestrator | 2025-08-29 21:04:38 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:38.817429 | orchestrator | 2025-08-29 21:04:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:41.861858 | orchestrator | 2025-08-29 21:04:41 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:41.863190 | orchestrator | 2025-08-29 21:04:41 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:41.864784 | orchestrator | 2025-08-29 21:04:41 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:41.865100 | orchestrator | 2025-08-29 21:04:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:44.901343 | orchestrator | 2025-08-29 21:04:44 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state STARTED 2025-08-29 21:04:44.902511 | orchestrator | 2025-08-29 21:04:44 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state STARTED 2025-08-29 21:04:44.904441 | orchestrator | 2025-08-29 21:04:44 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:44.904470 | orchestrator | 2025-08-29 21:04:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:47.954724 | orchestrator | 2025-08-29 21:04:47 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:04:47.956470 | orchestrator | 2025-08-29 21:04:47 | INFO  | Task a9a6123f-9bfc-442c-b1d1-01d5329ac3de is in state SUCCESS 2025-08-29 21:04:47.958611 | orchestrator | 2025-08-29 21:04:47.958665 | orchestrator | 2025-08-29 21:04:47.958678 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:04:47.958690 | orchestrator | 2025-08-29 21:04:47.958729 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:04:47.958741 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.249) 0:00:00.249 ********* 2025-08-29 21:04:47.958752 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.958778 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.958789 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.958799 | orchestrator | 2025-08-29 21:04:47.958810 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:04:47.958821 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.232) 0:00:00.482 ********* 2025-08-29 21:04:47.958832 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-08-29 21:04:47.958843 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-08-29 21:04:47.958854 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-08-29 21:04:47.958865 | orchestrator | 2025-08-29 21:04:47.958876 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-08-29 21:04:47.958886 | orchestrator | 2025-08-29 21:04:47.958897 | orchestrator | TASK [opensearch : include_tasks] 
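Between plays the osism wrapper simply polls its task IDs once per second until they leave the STARTED state, which is what produces the stretch of "Wait 1 second(s) until the next check" lines above; once one task reaches SUCCESS, the buffered output of the next play (note its task timestamps start back at 21:01:42) is flushed. The same wait-until pattern expressed as an Ansible sketch (the manager API URL and JSON field are hypothetical):

  - name: Wait for a deployment task to finish
    uri:
      url: "https://manager.testbed.osism.xyz/api/tasks/{{ task_id }}"   # hypothetical endpoint
      return_content: true
    register: task_status
    until: task_status.json.state in ['SUCCESS', 'FAILURE']
    retries: 600      # up to ~10 minutes
    delay: 1          # matches the 1 second(s) wait in the log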
********************************************** 2025-08-29 21:04:47.958907 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.337) 0:00:00.819 ********* 2025-08-29 21:04:47.958919 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.958929 | orchestrator | 2025-08-29 21:04:47.958940 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-08-29 21:04:47.958978 | orchestrator | Friday 29 August 2025 21:01:43 +0000 (0:00:00.456) 0:00:01.276 ********* 2025-08-29 21:04:47.958992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 21:04:47.959002 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 21:04:47.959013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 21:04:47.959023 | orchestrator | 2025-08-29 21:04:47.959034 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-08-29 21:04:47.959044 | orchestrator | Friday 29 August 2025 21:01:44 +0000 (0:00:00.693) 0:00:01.969 ********* 2025-08-29 21:04:47.959059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
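Setting vm.max_map_count to 262144 on every OpenSearch node is the usual kernel prerequisite for Lucene's mmap-heavy index access; stripped of kolla-ansible's surrounding loop, the task amounts to something like this sketch:

  - name: Setting sysctl values
    sysctl:
      name: vm.max_map_count
      value: "262144"
      sysctl_set: true
      state: present
      reload: true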
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959178 | orchestrator | 2025-08-29 21:04:47.959191 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 21:04:47.959204 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:01.573) 0:00:03.542 ********* 2025-08-29 21:04:47.959217 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.959229 | orchestrator | 2025-08-29 21:04:47.959242 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-08-29 21:04:47.959254 | orchestrator | Friday 29 August 2025 21:01:46 +0000 (0:00:00.488) 0:00:04.031 ********* 2025-08-29 21:04:47.959279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959322 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959384 | orchestrator | 2025-08-29 21:04:47.959398 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 21:04:47.959410 | orchestrator | Friday 29 August 2025 21:01:49 +0000 (0:00:03.026) 0:00:07.058 
********* 2025-08-29 21:04:47.959423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959456 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.959476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959510 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.959523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959556 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.959567 | orchestrator | 2025-08-29 21:04:47.959579 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 21:04:47.959589 | orchestrator | Friday 29 August 2025 21:01:50 +0000 (0:00:01.572) 0:00:08.631 ********* 2025-08-29 21:04:47.959607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959637 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.959648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959678 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.959695 | orchestrator | skipping: 
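The "Copying over backend internal TLS certificate/key" tasks are skipped on all three nodes because backend TLS is not enabled for this service in the deployment; in kolla-ansible terms that is governed by a global flag along these lines (a hedged sketch, not this testbed's actual configuration):

  # globals.yml (sketch)
  kolla_enable_tls_backend: "no"    # with "no", the per-service backend TLS copy tasks above are skipped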
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 21:04:47.959715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 21:04:47.959727 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.959738 | orchestrator | 2025-08-29 21:04:47.959748 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 21:04:47.959759 | orchestrator | Friday 29 August 2025 21:01:51 +0000 (0:00:00.836) 0:00:09.468 ********* 2025-08-29 21:04:47.959771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
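
The item dictionaries dumped in these tasks are kolla-ansible's per-service map for the opensearch role, iterated once per service. Reconstructed as YAML from the values logged for testbed-node-0 (the top-level variable name, opensearch_services, follows the upstream role convention and is an assumption here), one entry looks roughly like this:

    # Sketch reconstructed from the logged item values; variable name and exact
    # layout follow kolla-ansible conventions and are assumptions, not copied
    # from the repository.
    opensearch_services:
      opensearch:
        container_name: opensearch
        group: opensearch
        enabled: true
        image: registry.osism.tech/kolla/release/opensearch:2.19.2.20250711
        environment:
          OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
        volumes:
          - /etc/kolla/opensearch/:/var/lib/kolla/config_files/
          - /etc/localtime:/etc/localtime:ro
          - /etc/timezone:/etc/timezone:ro
          - opensearch:/var/lib/opensearch/data
          - kolla_logs:/var/log/kolla/
        healthcheck:
          interval: 30
          retries: 3
          start_period: 5
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
          timeout: 30
        haproxy:
          opensearch:
            enabled: true
            mode: http
            external: false
            port: 9200
            frontend_http_extra:
              - option dontlog-normal

The healthcheck block becomes the container's own health check (healthcheck_curl against the node's API port), while the haproxy block is consumed later when the load balancer configuration for the internal VIP is generated.
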
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.959832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.959884 | orchestrator | 2025-08-29 21:04:47.959895 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 21:04:47.959906 | orchestrator | Friday 29 August 2025 21:01:53 +0000 (0:00:02.319) 0:00:11.788 ********* 2025-08-29 21:04:47.959917 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.959928 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.959939 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.959949 | orchestrator | 2025-08-29 21:04:47.959988 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 21:04:47.960007 | orchestrator | Friday 29 August 2025 21:01:56 +0000 (0:00:02.412) 0:00:14.200 ********* 2025-08-29 21:04:47.960026 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.960045 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.960056 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.960066 | orchestrator | 2025-08-29 21:04:47.960077 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 21:04:47.960088 | orchestrator | Friday 29 August 2025 21:01:58 +0000 (0:00:01.862) 0:00:16.062 ********* 2025-08-29 21:04:47.960116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.960128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.960140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 21:04:47.960160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.960179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
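
The opensearch-dashboards entry carries two haproxy definitions, one bound to the internal VIP and one published behind the external FQDN with HTTP basic auth. Extracted from the logged item (only this fragment shown, rendering into frontends/backends is not part of this log):

    haproxy:
      opensearch-dashboards:            # internal VIP, port 5601
        enabled: true
        mode: http
        external: false
        port: 5601
        auth_user: opensearch
        auth_pass: password
      opensearch_dashboards_external:   # published at api.testbed.osism.xyz:5601
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: 5601
        listen_port: 5601
        auth_user: opensearch
        auth_pass: password
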
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.960197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 21:04:47.960209 | orchestrator | 2025-08-29 21:04:47.960227 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 21:04:47.960238 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:02.060) 0:00:18.123 ********* 2025-08-29 21:04:47.960249 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.960259 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.960270 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.960281 | orchestrator | 2025-08-29 21:04:47.960291 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 21:04:47.960302 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:00.266) 0:00:18.390 ********* 2025-08-29 21:04:47.960313 | orchestrator | 2025-08-29 21:04:47.960323 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 21:04:47.960334 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:00.091) 0:00:18.481 ********* 2025-08-29 21:04:47.960344 | orchestrator | 2025-08-29 21:04:47.960355 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 21:04:47.960365 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:00.058) 0:00:18.540 ********* 2025-08-29 21:04:47.960376 | orchestrator | 2025-08-29 21:04:47.960386 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 21:04:47.960397 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:00.196) 0:00:18.736 ********* 2025-08-29 21:04:47.960407 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.960418 | orchestrator | 2025-08-29 21:04:47.960429 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 21:04:47.960439 | orchestrator | Friday 29 August 2025 21:02:01 +0000 (0:00:00.243) 0:00:18.980 ********* 2025-08-29 21:04:47.960450 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.960461 | orchestrator | 2025-08-29 21:04:47.960471 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch 
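
The two skipped handlers, Disable shard allocation and Perform a flush, only run when an existing cluster is being upgraded; on this first deployment there is nothing to quiesce, so the role goes straight to restarting the containers. A rough sketch of what such handlers typically issue against the OpenSearch API (assumed equivalents, not taken from this playbook):

    - name: Disable shard allocation        # assumed equivalent, not from this log
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_cluster/settings"
        method: PUT
        body_format: json
        body:
          transient:
            cluster.routing.allocation.enable: "primaries"

    - name: Perform a flush                 # assumed equivalent, not from this log
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_flush"
        method: POST
        status_code: [200, 201]
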
container] ******************** 2025-08-29 21:04:47.960482 | orchestrator | Friday 29 August 2025 21:02:01 +0000 (0:00:00.306) 0:00:19.286 ********* 2025-08-29 21:04:47.960492 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.960503 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.960513 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.960524 | orchestrator | 2025-08-29 21:04:47.960534 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 21:04:47.960545 | orchestrator | Friday 29 August 2025 21:03:07 +0000 (0:01:06.361) 0:01:25.647 ********* 2025-08-29 21:04:47.960556 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.960566 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.960577 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.960587 | orchestrator | 2025-08-29 21:04:47.960598 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 21:04:47.960608 | orchestrator | Friday 29 August 2025 21:04:34 +0000 (0:01:26.621) 0:02:52.269 ********* 2025-08-29 21:04:47.960619 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.960629 | orchestrator | 2025-08-29 21:04:47.960640 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 21:04:47.960650 | orchestrator | Friday 29 August 2025 21:04:35 +0000 (0:00:00.669) 0:02:52.939 ********* 2025-08-29 21:04:47.960661 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.960672 | orchestrator | 2025-08-29 21:04:47.960682 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 21:04:47.960693 | orchestrator | Friday 29 August 2025 21:04:37 +0000 (0:00:02.409) 0:02:55.349 ********* 2025-08-29 21:04:47.960703 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.960714 | orchestrator | 2025-08-29 21:04:47.960725 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 21:04:47.960735 | orchestrator | Friday 29 August 2025 21:04:39 +0000 (0:00:02.164) 0:02:57.513 ********* 2025-08-29 21:04:47.960751 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.960770 | orchestrator | 2025-08-29 21:04:47.960788 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 21:04:47.960814 | orchestrator | Friday 29 August 2025 21:04:42 +0000 (0:00:02.678) 0:03:00.191 ********* 2025-08-29 21:04:47.960832 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.960850 | orchestrator | 2025-08-29 21:04:47.960876 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:04:47.960896 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 21:04:47.960921 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 21:04:47.960933 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 21:04:47.960944 | orchestrator | 2025-08-29 21:04:47.960982 | orchestrator | 2025-08-29 21:04:47.960999 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:04:47.961017 | orchestrator | Friday 29 August 
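
The retention tasks at the end of the play run once, against the first node, and manage an Index State Management (ISM) policy: check whether one exists, create it, then attach it to indices that already exist. Assuming the role drives the standard ISM endpoints with the uri module, the calls look approximately like this (policy id, retention period and index pattern are illustrative, not read from this log):

    - name: Create new log retention policy            # policy id/body are assumptions
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"
        method: PUT
        body_format: json
        body:
          policy:
            description: "Delete indices after a retention period"
            default_state: retain
            states:
              - name: retain
                actions: []
                transitions:
                  - state_name: delete
                    conditions:
                      min_index_age: "31d"
              - name: delete
                actions:
                  - delete: {}
                transitions: []
        status_code: [200, 201]

    - name: Apply retention policy to existing indices  # index pattern is an assumption
      ansible.builtin.uri:
        url: "http://192.168.16.10:9200/_plugins/_ism/add/flog-*"
        method: POST
        body_format: json
        body:
          policy_id: retention
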
2025 21:04:44 +0000 (0:00:02.488) 0:03:02.680 *********
2025-08-29 21:04:47.961031 | orchestrator | ===============================================================================
2025-08-29 21:04:47.961042 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 86.62s
2025-08-29 21:04:47.961052 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.36s
2025-08-29 21:04:47.961063 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.03s
2025-08-29 21:04:47.961074 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.68s
2025-08-29 21:04:47.961084 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.49s
2025-08-29 21:04:47.961095 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.41s
2025-08-29 21:04:47.961105 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.41s
2025-08-29 21:04:47.961116 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s
2025-08-29 21:04:47.961127 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.16s
2025-08-29 21:04:47.961137 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.06s
2025-08-29 21:04:47.961148 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.86s
2025-08-29 21:04:47.961159 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.57s
2025-08-29 21:04:47.961169 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.57s
2025-08-29 21:04:47.961180 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.84s
2025-08-29 21:04:47.961190 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.69s
2025-08-29 21:04:47.961201 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s
2025-08-29 21:04:47.961212 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s
2025-08-29 21:04:47.961222 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s
2025-08-29 21:04:47.961233 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.35s
2025-08-29 21:04:47.961243 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2025-08-29 21:04:47.961259 | orchestrator | 2025-08-29 21:04:47 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED
2025-08-29 21:04:47.961474 | orchestrator | 2025-08-29 21:04:47 | INFO  | Task 21b54432-1429-4394-81c0-1f698f101169 is in state SUCCESS
2025-08-29 21:04:47.962578 | orchestrator |
2025-08-29 21:04:47.962626 | orchestrator |
2025-08-29 21:04:47.962644 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-08-29 21:04:47.962661 | orchestrator |
2025-08-29 21:04:47.962677 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 21:04:47.962710 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.100) 0:00:00.100 *********
2025-08-29 21:04:47.962727 | orchestrator | ok: [localhost] => {
2025-08-29 21:04:47.962744 | orchestrator |
"msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-08-29 21:04:47.962762 | orchestrator | } 2025-08-29 21:04:47.962779 | orchestrator | 2025-08-29 21:04:47.962796 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 21:04:47.963144 | orchestrator | Friday 29 August 2025 21:01:42 +0000 (0:00:00.038) 0:00:00.139 ********* 2025-08-29 21:04:47.963166 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 21:04:47.963184 | orchestrator | ...ignoring 2025-08-29 21:04:47.963201 | orchestrator | 2025-08-29 21:04:47.963217 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 21:04:47.963233 | orchestrator | Friday 29 August 2025 21:01:44 +0000 (0:00:02.667) 0:00:02.807 ********* 2025-08-29 21:04:47.963248 | orchestrator | skipping: [localhost] 2025-08-29 21:04:47.963264 | orchestrator | 2025-08-29 21:04:47.963279 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 21:04:47.963295 | orchestrator | Friday 29 August 2025 21:01:44 +0000 (0:00:00.059) 0:00:02.866 ********* 2025-08-29 21:04:47.963434 | orchestrator | ok: [localhost] 2025-08-29 21:04:47.963457 | orchestrator | 2025-08-29 21:04:47.963474 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:04:47.963491 | orchestrator | 2025-08-29 21:04:47.963507 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:04:47.963608 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:00.124) 0:00:02.991 ********* 2025-08-29 21:04:47.963627 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.963649 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.963670 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.963686 | orchestrator | 2025-08-29 21:04:47.963702 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:04:47.963718 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:00.277) 0:00:03.268 ********* 2025-08-29 21:04:47.963742 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 21:04:47.963773 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 21:04:47.963789 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 21:04:47.963805 | orchestrator | 2025-08-29 21:04:47.963819 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 21:04:47.963840 | orchestrator | 2025-08-29 21:04:47.963863 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 21:04:47.963880 | orchestrator | Friday 29 August 2025 21:01:45 +0000 (0:00:00.595) 0:00:03.864 ********* 2025-08-29 21:04:47.963897 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:04:47.963913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 21:04:47.963929 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 21:04:47.963939 | orchestrator | 2025-08-29 21:04:47.963949 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 21:04:47.963993 | orchestrator | Friday 29 
August 2025 21:01:46 +0000 (0:00:00.491) 0:00:04.355 ********* 2025-08-29 21:04:47.964003 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.964014 | orchestrator | 2025-08-29 21:04:47.964024 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 21:04:47.964033 | orchestrator | Friday 29 August 2025 21:01:47 +0000 (0:00:00.574) 0:00:04.929 ********* 2025-08-29 21:04:47.964066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964175 | orchestrator | 2025-08-29 21:04:47.964204 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 21:04:47.964216 | orchestrator | Friday 29 August 2025 21:01:50 +0000 (0:00:03.374) 0:00:08.304 ********* 2025-08-29 21:04:47.964227 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.964238 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.964249 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.964261 | orchestrator | 2025-08-29 21:04:47.964272 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 21:04:47.964283 | orchestrator | Friday 29 August 2025 21:01:51 +0000 (0:00:00.619) 0:00:08.924 ********* 2025-08-29 21:04:47.964294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.964305 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.964316 | orchestrator | changed: [testbed-node-0] 2025-08-29 
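
The repeated mariadb item is again the role's per-service map; the notable part is the haproxy block, whose custom_member_list routes all client traffic to a single Galera member and marks the other two as backup. Reconstructed from the values logged for testbed-node-0 (variable name mariadb_services is assumed; the generated password is omitted):

    mariadb_services:
      mariadb:
        container_name: mariadb
        group: mariadb_shard_0
        enabled: true
        image: registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711
        healthcheck:
          interval: 30
          retries: 3
          start_period: 5
          test: ["CMD-SHELL", "/usr/bin/clustercheck"]
          timeout: 30
        environment:
          MYSQL_USERNAME: monitor
          # MYSQL_PASSWORD omitted; it is generated per environment
          MYSQL_HOST: 192.168.16.10
          AVAILABLE_WHEN_DONOR: "1"
        haproxy:
          mariadb:
            enabled: true
            mode: tcp
            port: 3306
            listen_port: 3306
            custom_member_list:
              - " server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5"
              - " server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup"
              - " server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup"

Pinning writes to one active member avoids Galera certification conflicts from multi-writer traffic; HAProxy only promotes a backup server when the health check on the active node fails.
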
21:04:47.964326 | orchestrator | 2025-08-29 21:04:47.964337 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 21:04:47.964349 | orchestrator | Friday 29 August 2025 21:01:52 +0000 (0:00:01.373) 0:00:10.297 ********* 2025-08-29 21:04:47.964366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.964458 | orchestrator | 2025-08-29 21:04:47.964474 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-08-29 21:04:47.964509 | orchestrator | Friday 29 August 2025 21:01:55 +0000 (0:00:02.879) 0:00:13.177 ********* 2025-08-29 21:04:47.964524 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.964540 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.964556 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.964573 | orchestrator | 2025-08-29 21:04:47.964590 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-08-29 21:04:47.964599 | orchestrator | Friday 29 August 2025 21:01:56 +0000 (0:00:01.054) 0:00:14.232 ********* 2025-08-29 21:04:47.964609 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.964618 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.964628 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.964637 | orchestrator | 2025-08-29 21:04:47.964647 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 21:04:47.964656 | orchestrator | Friday 29 August 
2025 21:01:59 +0000 (0:00:03.630) 0:00:17.863 ********* 2025-08-29 21:04:47.964666 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.964675 | orchestrator | 2025-08-29 21:04:47.964685 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 21:04:47.964694 | orchestrator | Friday 29 August 2025 21:02:00 +0000 (0:00:00.404) 0:00:18.267 ********* 2025-08-29 21:04:47.964719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.964738 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.964763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.964790 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.964820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.964840 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.964856 | orchestrator | 2025-08-29 21:04:47.964865 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 21:04:47.964875 | orchestrator | Friday 29 August 2025 21:02:03 +0000 (0:00:03.178) 0:00:21.446 ********* 2025-08-29 21:04:47.964891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.964908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.964926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.964937 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.964979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.965006 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.965016 | orchestrator | 2025-08-29 21:04:47.965026 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 21:04:47.965036 | orchestrator | Friday 29 August 2025 21:02:05 +0000 (0:00:02.191) 0:00:23.638 ********* 2025-08-29 21:04:47.965063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.965081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.965192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.965225 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.965236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 21:04:47.965246 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.965256 | orchestrator | 2025-08-29 21:04:47.965266 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 21:04:47.965275 | orchestrator | Friday 29 August 2025 21:02:08 +0000 (0:00:02.817) 0:00:26.455 ********* 2025-08-29 21:04:47.965300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.965318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.965338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 21:04:47.965356 | orchestrator | 2025-08-29 21:04:47.965366 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-08-29 21:04:47.965375 | orchestrator | Friday 29 August 2025 21:02:11 +0000 (0:00:02.791) 0:00:29.246 ********* 2025-08-29 21:04:47.965385 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.965395 | orchestrator | changed: [testbed-node-1] 2025-08-29 
21:04:47.965404 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.965414 | orchestrator | 2025-08-29 21:04:47.965423 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-08-29 21:04:47.965437 | orchestrator | Friday 29 August 2025 21:02:12 +0000 (0:00:00.974) 0:00:30.220 ********* 2025-08-29 21:04:47.965447 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.965456 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.965466 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.965475 | orchestrator | 2025-08-29 21:04:47.965485 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-08-29 21:04:47.965494 | orchestrator | Friday 29 August 2025 21:02:12 +0000 (0:00:00.322) 0:00:30.543 ********* 2025-08-29 21:04:47.965504 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.965513 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.965523 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.965532 | orchestrator | 2025-08-29 21:04:47.965542 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-08-29 21:04:47.965551 | orchestrator | Friday 29 August 2025 21:02:13 +0000 (0:00:00.342) 0:00:30.886 ********* 2025-08-29 21:04:47.965561 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-08-29 21:04:47.965571 | orchestrator | ...ignoring 2025-08-29 21:04:47.965581 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-08-29 21:04:47.965591 | orchestrator | ...ignoring 2025-08-29 21:04:47.965601 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-08-29 21:04:47.965611 | orchestrator | ...ignoring 2025-08-29 21:04:47.965620 | orchestrator | 2025-08-29 21:04:47.965630 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-08-29 21:04:47.965639 | orchestrator | Friday 29 August 2025 21:02:23 +0000 (0:00:10.905) 0:00:41.791 ********* 2025-08-29 21:04:47.965649 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.965658 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.965667 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.965677 | orchestrator | 2025-08-29 21:04:47.965686 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-08-29 21:04:47.965696 | orchestrator | Friday 29 August 2025 21:02:24 +0000 (0:00:00.638) 0:00:42.429 ********* 2025-08-29 21:04:47.965705 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.965715 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.965724 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.965734 | orchestrator | 2025-08-29 21:04:47.965743 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-08-29 21:04:47.965752 | orchestrator | Friday 29 August 2025 21:02:24 +0000 (0:00:00.444) 0:00:42.874 ********* 2025-08-29 21:04:47.965762 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.965771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.965786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.965796 | orchestrator | 2025-08-29 21:04:47.965805 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-08-29 21:04:47.965815 | orchestrator | Friday 29 August 2025 21:02:25 +0000 (0:00:00.448) 0:00:43.322 ********* 2025-08-29 21:04:47.965824 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.965833 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.965843 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.965852 | orchestrator | 2025-08-29 21:04:47.965862 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-08-29 21:04:47.965877 | orchestrator | Friday 29 August 2025 21:02:25 +0000 (0:00:00.459) 0:00:43.782 ********* 2025-08-29 21:04:47.965886 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.965896 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.965906 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.965915 | orchestrator | 2025-08-29 21:04:47.965925 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-08-29 21:04:47.965934 | orchestrator | Friday 29 August 2025 21:02:26 +0000 (0:00:00.778) 0:00:44.560 ********* 2025-08-29 21:04:47.965944 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.966064 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.966080 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.966090 | orchestrator | 2025-08-29 21:04:47.966099 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 21:04:47.966109 | orchestrator | Friday 29 August 2025 21:02:27 +0000 (0:00:00.436) 0:00:44.997 ********* 2025-08-29 21:04:47.966119 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.966128 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 21:04:47.966138 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-08-29 21:04:47.966148 | orchestrator | 2025-08-29 21:04:47.966157 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-08-29 21:04:47.966167 | orchestrator | Friday 29 August 2025 21:02:27 +0000 (0:00:00.368) 0:00:45.366 ********* 2025-08-29 21:04:47.966176 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.966186 | orchestrator | 2025-08-29 21:04:47.966195 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-08-29 21:04:47.966205 | orchestrator | Friday 29 August 2025 21:02:38 +0000 (0:00:10.817) 0:00:56.184 ********* 2025-08-29 21:04:47.966214 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.966224 | orchestrator | 2025-08-29 21:04:47.966233 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 21:04:47.966243 | orchestrator | Friday 29 August 2025 21:02:38 +0000 (0:00:00.110) 0:00:56.295 ********* 2025-08-29 21:04:47.966254 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.966271 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.966288 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.966303 | orchestrator | 2025-08-29 21:04:47.966319 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-08-29 21:04:47.966343 | orchestrator | Friday 29 August 2025 21:02:39 +0000 (0:00:00.969) 0:00:57.264 ********* 2025-08-29 21:04:47.966361 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.966378 | orchestrator | 2025-08-29 21:04:47.966395 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-08-29 21:04:47.966416 | orchestrator | Friday 29 August 2025 21:02:46 +0000 (0:00:06.943) 0:01:04.208 ********* 2025-08-29 21:04:47.966425 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.966433 | orchestrator | 2025-08-29 21:04:47.966440 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-08-29 21:04:47.966448 | orchestrator | Friday 29 August 2025 21:02:47 +0000 (0:00:01.618) 0:01:05.826 ********* 2025-08-29 21:04:47.966456 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.966464 | orchestrator | 2025-08-29 21:04:47.966471 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-08-29 21:04:47.966486 | orchestrator | Friday 29 August 2025 21:02:50 +0000 (0:00:02.250) 0:01:08.076 ********* 2025-08-29 21:04:47.966494 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.966502 | orchestrator | 2025-08-29 21:04:47.966510 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-08-29 21:04:47.966518 | orchestrator | Friday 29 August 2025 21:02:50 +0000 (0:00:00.101) 0:01:08.177 ********* 2025-08-29 21:04:47.966525 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.966533 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.966541 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.966549 | orchestrator | 2025-08-29 21:04:47.966557 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-08-29 21:04:47.966564 | orchestrator | Friday 29 August 2025 21:02:50 +0000 (0:00:00.398) 0:01:08.576 ********* 
2025-08-29 21:04:47.966572 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.966580 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 21:04:47.966588 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.966596 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.966603 | orchestrator | 2025-08-29 21:04:47.966611 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 21:04:47.966619 | orchestrator | skipping: no hosts matched 2025-08-29 21:04:47.966626 | orchestrator | 2025-08-29 21:04:47.966634 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 21:04:47.966642 | orchestrator | 2025-08-29 21:04:47.966650 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 21:04:47.966657 | orchestrator | Friday 29 August 2025 21:02:50 +0000 (0:00:00.287) 0:01:08.864 ********* 2025-08-29 21:04:47.966665 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:04:47.966673 | orchestrator | 2025-08-29 21:04:47.966680 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 21:04:47.966688 | orchestrator | Friday 29 August 2025 21:03:10 +0000 (0:00:19.175) 0:01:28.039 ********* 2025-08-29 21:04:47.966696 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.966703 | orchestrator | 2025-08-29 21:04:47.966711 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 21:04:47.966719 | orchestrator | Friday 29 August 2025 21:03:30 +0000 (0:00:20.570) 0:01:48.610 ********* 2025-08-29 21:04:47.966726 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.966734 | orchestrator | 2025-08-29 21:04:47.966742 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 21:04:47.966750 | orchestrator | 2025-08-29 21:04:47.966757 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 21:04:47.966765 | orchestrator | Friday 29 August 2025 21:03:33 +0000 (0:00:02.426) 0:01:51.036 ********* 2025-08-29 21:04:47.966773 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:04:47.966781 | orchestrator | 2025-08-29 21:04:47.966788 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 21:04:47.966804 | orchestrator | Friday 29 August 2025 21:03:56 +0000 (0:00:23.602) 0:02:14.638 ********* 2025-08-29 21:04:47.966812 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.966820 | orchestrator | 2025-08-29 21:04:47.966828 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 21:04:47.966836 | orchestrator | Friday 29 August 2025 21:04:12 +0000 (0:00:15.593) 0:02:30.232 ********* 2025-08-29 21:04:47.966843 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.966851 | orchestrator | 2025-08-29 21:04:47.966859 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 21:04:47.966867 | orchestrator | 2025-08-29 21:04:47.966881 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 21:04:47.966893 | orchestrator | Friday 29 August 2025 21:04:15 +0000 (0:00:02.706) 0:02:32.938 ********* 2025-08-29 21:04:47.966906 | orchestrator | changed: [testbed-node-0] 
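The port-liveness and WSREP-sync waits that recur after each container (re)start above reduce to two checks that can also be run standalone. The following is a minimal sketch, not the kolla-ansible role's actual task code; the node address, container name and retry values are assumptions, while /usr/bin/clustercheck is the same health check configured for the mariadb container earlier in this log.

- name: Wait until MariaDB answers on the Galera client port
  ansible.builtin.wait_for:
    host: 192.168.16.10        # internal address of the node being checked (example value from this run)
    port: 3306
    search_regex: "MariaDB"    # same search string as the ignored pre-check timeouts above
    timeout: 60

- name: Wait until the Galera health check reports the node as synced
  ansible.builtin.command: docker exec mariadb /usr/bin/clustercheck
  register: clustercheck
  retries: 30
  delay: 10
  until: clustercheck.rc == 0  # clustercheck exits 0 once the node is Synced (or a donor, given AVAILABLE_WHEN_DONOR=1)
  changed_when: false

Once all three nodes pass both checks the cluster is usable, which is what the post-configuration tasks below rely on when they create the shard root, monitor and backup users and then verify the service through the VIP.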
2025-08-29 21:04:47.966924 | orchestrator | 2025-08-29 21:04:47.966941 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 21:04:47.966983 | orchestrator | Friday 29 August 2025 21:04:30 +0000 (0:00:15.598) 0:02:48.537 ********* 2025-08-29 21:04:47.966999 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.967015 | orchestrator | 2025-08-29 21:04:47.967033 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 21:04:47.967048 | orchestrator | Friday 29 August 2025 21:04:31 +0000 (0:00:00.566) 0:02:49.104 ********* 2025-08-29 21:04:47.967061 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.967071 | orchestrator | 2025-08-29 21:04:47.967079 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-08-29 21:04:47.967086 | orchestrator | 2025-08-29 21:04:47.967094 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 21:04:47.967102 | orchestrator | Friday 29 August 2025 21:04:33 +0000 (0:00:02.269) 0:02:51.374 ********* 2025-08-29 21:04:47.967110 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:04:47.967117 | orchestrator | 2025-08-29 21:04:47.967125 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-08-29 21:04:47.967133 | orchestrator | Friday 29 August 2025 21:04:33 +0000 (0:00:00.507) 0:02:51.881 ********* 2025-08-29 21:04:47.967141 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.967149 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.967156 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.967164 | orchestrator | 2025-08-29 21:04:47.967172 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-08-29 21:04:47.967180 | orchestrator | Friday 29 August 2025 21:04:36 +0000 (0:00:02.516) 0:02:54.398 ********* 2025-08-29 21:04:47.967188 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.967196 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.967208 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.967216 | orchestrator | 2025-08-29 21:04:47.967224 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-08-29 21:04:47.967232 | orchestrator | Friday 29 August 2025 21:04:38 +0000 (0:00:02.176) 0:02:56.574 ********* 2025-08-29 21:04:47.967240 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.967248 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.967255 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.967263 | orchestrator | 2025-08-29 21:04:47.967271 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-08-29 21:04:47.967279 | orchestrator | Friday 29 August 2025 21:04:40 +0000 (0:00:02.152) 0:02:58.726 ********* 2025-08-29 21:04:47.967287 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.967294 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.967302 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:04:47.967310 | orchestrator | 2025-08-29 21:04:47.967318 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-08-29 21:04:47.967325 | orchestrator | Friday 29 August 2025 21:04:43 +0000 (0:00:02.163) 0:03:00.890 ********* 
2025-08-29 21:04:47.967333 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:04:47.967341 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:04:47.967349 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:04:47.967356 | orchestrator | 2025-08-29 21:04:47.967364 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 21:04:47.967372 | orchestrator | Friday 29 August 2025 21:04:45 +0000 (0:00:02.915) 0:03:03.806 ********* 2025-08-29 21:04:47.967380 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:04:47.967387 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:04:47.967395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:04:47.967403 | orchestrator | 2025-08-29 21:04:47.967410 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:04:47.967418 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 21:04:47.967426 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-08-29 21:04:47.967442 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 21:04:47.967450 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 21:04:47.967458 | orchestrator | 2025-08-29 21:04:47.967466 | orchestrator | 2025-08-29 21:04:47.967474 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:04:47.967481 | orchestrator | Friday 29 August 2025 21:04:46 +0000 (0:00:00.240) 0:03:04.047 ********* 2025-08-29 21:04:47.967489 | orchestrator | =============================================================================== 2025-08-29 21:04:47.967497 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.78s 2025-08-29 21:04:47.967505 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.16s 2025-08-29 21:04:47.967519 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 15.60s 2025-08-29 21:04:47.967527 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2025-08-29 21:04:47.967535 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.82s 2025-08-29 21:04:47.967543 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.94s 2025-08-29 21:04:47.967551 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.13s 2025-08-29 21:04:47.967558 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.63s 2025-08-29 21:04:47.967566 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.37s 2025-08-29 21:04:47.967574 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.18s 2025-08-29 21:04:47.967581 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.92s 2025-08-29 21:04:47.967589 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.88s 2025-08-29 21:04:47.967597 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.82s 2025-08-29 21:04:47.967604 | orchestrator | mariadb : Check mariadb containers 
-------------------------------------- 2.79s 2025-08-29 21:04:47.967612 | orchestrator | Check MariaDB service --------------------------------------------------- 2.67s 2025-08-29 21:04:47.967620 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.52s 2025-08-29 21:04:47.967628 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.27s 2025-08-29 21:04:47.967635 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.25s 2025-08-29 21:04:47.967643 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.19s 2025-08-29 21:04:47.967651 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.18s 2025-08-29 21:04:47.967658 | orchestrator | 2025-08-29 21:04:47 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:47.967666 | orchestrator | 2025-08-29 21:04:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:51.019296 | orchestrator | 2025-08-29 21:04:51 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:04:51.021682 | orchestrator | 2025-08-29 21:04:51 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:04:51.022992 | orchestrator | 2025-08-29 21:04:51 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:51.023020 | orchestrator | 2025-08-29 21:04:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:54.060067 | orchestrator | 2025-08-29 21:04:54 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:04:54.060337 | orchestrator | 2025-08-29 21:04:54 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:04:54.060379 | orchestrator | 2025-08-29 21:04:54 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:54.060396 | orchestrator | 2025-08-29 21:04:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:04:57.094569 | orchestrator | 2025-08-29 21:04:57 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:04:57.097550 | orchestrator | 2025-08-29 21:04:57 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:04:57.100012 | orchestrator | 2025-08-29 21:04:57 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:04:57.100723 | orchestrator | 2025-08-29 21:04:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:05:00.143457 | orchestrator | 2025-08-29 21:05:00 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:05:00.144643 | orchestrator | 2025-08-29 21:05:00 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:05:00.146173 | orchestrator | 2025-08-29 21:05:00 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 21:05:00.146201 | orchestrator | 2025-08-29 21:05:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:05:03.182460 | orchestrator | 2025-08-29 21:05:03 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:05:03.184029 | orchestrator | 2025-08-29 21:05:03 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:05:03.185622 | orchestrator | 2025-08-29 21:05:03 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state STARTED 2025-08-29 
21:05:03.186149 | orchestrator | 2025-08-29 21:05:03 | INFO  | Wait 1 second(s) until the next check
[... identical status polls repeated roughly every three seconds from 21:05:06 to 21:06:22: tasks e8b819ff-5c55-4181-a6ac-94b0ce58791d, 5285d6f9-1431-41f6-a609-ca0886e6cc79 and 0225c1c4-1345-492f-83bd-ec465cb92dc3 remain in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" ...]
2025-08-29 21:06:25.442846 | orchestrator | 2025-08-29 21:06:25 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:06:25.443513 | orchestrator | 2025-08-29 21:06:25 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:25.445734 | orchestrator | 2025-08-29 21:06:25 | INFO  | Task 0225c1c4-1345-492f-83bd-ec465cb92dc3 is in state SUCCESS 2025-08-29 21:06:25.447461 | orchestrator | 2025-08-29 21:06:25.448261 | orchestrator |
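The play that follows creates the Ceph pools through the monitor containers that ceph-facts discovers below (container binary docker, containers named ceph-mon-<hostname>). Stripped of the fact gathering, the operation amounts to roughly the following sketch; it is illustrative only and not the playbook's actual tasks, and the pool name, PG counts and delegate host are assumptions.

- name: Create an RBD pool through a running monitor container
  ansible.builtin.command: >
    docker exec ceph-mon-testbed-node-0
    ceph osd pool create volumes 32 32
  delegate_to: testbed-node-0
  run_once: true
  register: pool_create
  changed_when: "'created' in pool_create.stdout"

- name: Enable the rbd application on the new pool
  ansible.builtin.command: >
    docker exec ceph-mon-testbed-node-0
    ceph osd pool application enable volumes rbd
  delegate_to: testbed-node-0
  run_once: true
  register: pool_app
  changed_when: "'enabled' in pool_app.stdout"

The real play resolves the container binary and a live monitor dynamically, which is what the docker ps -q --filter name=ceph-mon-<node> lookups in the ceph-facts output below are for.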
2025-08-29 21:06:25.448290 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 21:06:25.448303 | orchestrator | 2025-08-29 21:06:25.448314 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 21:06:25.448326 | orchestrator | Friday 29 August 2025 21:04:15 +0000 (0:00:00.585) 0:00:00.585 ********* 2025-08-29 21:06:25.448338 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:06:25.448352 | orchestrator | 2025-08-29 21:06:25.448362 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 21:06:25.448372 | orchestrator | Friday 29 August 2025 21:04:15 +0000 (0:00:00.651) 0:00:01.236 ********* 2025-08-29 21:06:25.448382 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448393 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448403 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448413 | orchestrator | 2025-08-29 21:06:25.448423 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 21:06:25.448432 | orchestrator | Friday 29 August 2025 21:04:16 +0000 (0:00:00.274) 0:00:01.971 ********* 2025-08-29 21:06:25.448442 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448451 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448461 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448470 | orchestrator | 2025-08-29 21:06:25.448480 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 21:06:25.448490 | orchestrator | Friday 29 August 2025 21:04:16 +0000 (0:00:00.712) 0:00:02.245 ********* 2025-08-29 21:06:25.448499 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448509 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448518 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448528 | orchestrator | 2025-08-29 21:06:25.448537 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 21:06:25.448547 | orchestrator | Friday 29 August 2025 21:04:17 +0000 (0:00:00.320) 0:00:02.958 ********* 2025-08-29 21:06:25.448557 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448584 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448604 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448616 | orchestrator | 2025-08-29 21:06:25.448634 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 21:06:25.448651 | orchestrator | Friday 29 August 2025 21:04:17 +0000 (0:00:00.287) 0:00:03.278 ********* 2025-08-29 21:06:25.448666 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448682 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448697 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448714 | orchestrator | 2025-08-29 21:06:25.448729 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 21:06:25.448745 | orchestrator | Friday 29 August 2025 21:04:18 +0000 (0:00:00.299) 0:00:03.566 ********* 2025-08-29 21:06:25.448763 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.448817 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.448835 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.448851 | orchestrator | 2025-08-29 21:06:25.448867 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 21:06:25.448902 | orchestrator | Friday 29 August 2025 21:04:18 +0000 (0:00:00.299) 0:00:03.866 ********* 2025-08-29 21:06:25.448919 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.448936 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.448948 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.448959 | orchestrator | 2025-08-29 21:06:25.448971 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 21:06:25.448982 | orchestrator | Friday 29 August 2025 21:04:18 +0000 (0:00:00.467) 0:00:04.333 ********* 2025-08-29 21:06:25.448993 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.449029 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.449040 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.449051 | orchestrator | 2025-08-29 21:06:25.449062 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 21:06:25.449074 | orchestrator | Friday 29 August 2025 21:04:19 +0000 (0:00:00.297) 0:00:04.630 ********* 2025-08-29 21:06:25.449085 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 21:06:25.449096 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:06:25.449107 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:06:25.449118 | orchestrator | 2025-08-29 21:06:25.449129 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 21:06:25.449140 | orchestrator | Friday 29 August 2025 21:04:19 +0000 (0:00:00.633) 0:00:05.264 ********* 2025-08-29 21:06:25.449152 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.449164 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.449174 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.449185 | orchestrator | 2025-08-29 21:06:25.449197 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 21:06:25.449208 | orchestrator | Friday 29 August 2025 21:04:20 +0000 (0:00:00.378) 0:00:05.642 ********* 2025-08-29 21:06:25.449219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 21:06:25.449229 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:06:25.449239 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:06:25.449249 | orchestrator | 2025-08-29 21:06:25.449258 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 21:06:25.449268 | orchestrator | Friday 29 August 2025 21:04:22 +0000 (0:00:01.930) 0:00:07.572 ********* 2025-08-29 21:06:25.449278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 21:06:25.449288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 21:06:25.449298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 21:06:25.449308 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.449317 | orchestrator | 2025-08-29 21:06:25.449327 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 21:06:25.449386 | orchestrator | Friday 29 August 2025 21:04:22 +0000 (0:00:00.375) 0:00:07.948 ********* 2025-08-29 
21:06:25.449401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449444 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.449454 | orchestrator | 2025-08-29 21:06:25.449468 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 21:06:25.449484 | orchestrator | Friday 29 August 2025 21:04:23 +0000 (0:00:00.681) 0:00:08.629 ********* 2025-08-29 21:06:25.449503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449548 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.449565 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.449582 | orchestrator | 2025-08-29 21:06:25.449600 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 21:06:25.449616 | orchestrator | Friday 29 August 2025 21:04:23 +0000 (0:00:00.138) 0:00:08.768 ********* 2025-08-29 21:06:25.449636 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '09b265cfb805', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 21:04:20.781986', 'end': '2025-08-29 21:04:20.814906', 'delta': '0:00:00.032920', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['09b265cfb805'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 21:06:25.449660 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9852d91387b6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 21:04:21.413411', 'end': '2025-08-29 21:04:21.452330', 'delta': '0:00:00.038919', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9852d91387b6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 21:06:25.449724 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f96ce5261eb1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 21:04:21.949726', 'end': '2025-08-29 21:04:21.987338', 'delta': '0:00:00.037612', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f96ce5261eb1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 21:06:25.449755 | orchestrator |
2025-08-29 21:06:25.449765 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 21:06:25.449775 | orchestrator | Friday 29 August 2025 21:04:23 +0000 (0:00:00.265) 0:00:09.034 *********
2025-08-29 21:06:25.449785 | orchestrator | ok: [testbed-node-3]
2025-08-29 21:06:25.449794 | orchestrator | ok: [testbed-node-4]
2025-08-29 21:06:25.449804 | orchestrator | ok: [testbed-node-5]
2025-08-29 21:06:25.449814 | orchestrator |
2025-08-29 21:06:25.449824 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 21:06:25.449833 | orchestrator | Friday 29 August 2025 21:04:23 +0000 (0:00:00.384) 0:00:09.418 *********
2025-08-29 21:06:25.449843 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-08-29 21:06:25.449853 | orchestrator |
2025-08-29 21:06:25.449863 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 21:06:25.449872 | orchestrator | Friday 29 August 2025 21:04:25 +0000 (0:00:01.826) 0:00:11.245 *********
2025-08-29 21:06:25.449882 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.449892 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.449902 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.449911 | orchestrator |
2025-08-29 21:06:25.449921 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 21:06:25.449931 | orchestrator | Friday 29 August 2025 21:04:26 +0000 (0:00:00.244) 0:00:11.489 *********
2025-08-29 21:06:25.449941 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.449950 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.449960 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.449969 | orchestrator |
2025-08-29 21:06:25.449979 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 21:06:25.449989 | orchestrator | Friday 29 August 2025 21:04:26 +0000 (0:00:00.371) 0:00:11.861 *********
2025-08-29 21:06:25.450067 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450080 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.450090 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.450100 | orchestrator |
2025-08-29 21:06:25.450110 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 21:06:25.450126 | orchestrator | Friday 29 August 2025 21:04:26 +0000 (0:00:00.362) 0:00:12.223 *********
2025-08-29 21:06:25.450136 | orchestrator | ok: [testbed-node-3]
2025-08-29 21:06:25.450146 | orchestrator |
2025-08-29 21:06:25.450157 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 21:06:25.450174 | orchestrator | Friday 29 August 2025 21:04:26 +0000 (0:00:00.116) 0:00:12.339 *********
2025-08-29 21:06:25.450190 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450207 | orchestrator |
2025-08-29 21:06:25.450223 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 21:06:25.450239 | orchestrator | Friday 29 August 2025 21:04:27 +0000 (0:00:00.202) 0:00:12.541 *********
2025-08-29 21:06:25.450256 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450272 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.450290 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.450308 | orchestrator |
2025-08-29 21:06:25.450328 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 21:06:25.450346 | orchestrator | Friday 29 August 2025 21:04:27 +0000 (0:00:00.274) 0:00:12.782 *********
2025-08-29 21:06:25.450363 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450379 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.450409 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.450427 | orchestrator |
2025-08-29 21:06:25.450438 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 21:06:25.450448 | orchestrator | Friday 29 August 2025 21:04:27 +0000 (0:00:00.386) 0:00:13.056 *********
2025-08-29 21:06:25.450457 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450467 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.450477 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.450486 | orchestrator |
2025-08-29 21:06:25.450496 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 21:06:25.450505 | orchestrator | Friday 29 August 2025 21:04:27 +0000 (0:00:00.274) 0:00:13.443 *********
2025-08-29 21:06:25.450515 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:06:25.450524 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:06:25.450534 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:06:25.450543 | orchestrator |
2025-08-29 21:06:25.450553 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 21:06:25.450563 | orchestrator | Friday 29 August 2025 21:04:28 +0000 (0:00:00.277) 0:00:13.718 *********
2025-08-29 21:06:25.450572 |
orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.450582 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.450592 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.450601 | orchestrator | 2025-08-29 21:06:25.450611 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 21:06:25.450620 | orchestrator | Friday 29 August 2025 21:04:28 +0000 (0:00:00.277) 0:00:13.995 ********* 2025-08-29 21:06:25.450630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.450639 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.450649 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.450659 | orchestrator | 2025-08-29 21:06:25.450668 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 21:06:25.450715 | orchestrator | Friday 29 August 2025 21:04:28 +0000 (0:00:00.267) 0:00:14.263 ********* 2025-08-29 21:06:25.450727 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.450737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.450746 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.450756 | orchestrator | 2025-08-29 21:06:25.450765 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 21:06:25.450775 | orchestrator | Friday 29 August 2025 21:04:29 +0000 (0:00:00.461) 0:00:14.725 ********* 2025-08-29 21:06:25.450786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0', 'dm-uuid-LVM-Fe5paP4RaHCNTyOtYUd48D6X5xKxbgN5ZF9ViuG9w6ObaWaik0UusXTfQv6Upnj5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43', 'dm-uuid-LVM-lGPIba1XCmCrdedZxItRlQ5wsxJuKeX73qUmJ1hQjhylmCIxoVBMqqptLe6gyQix'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.450939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.450960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dw20gd-69Qv-eKty-yH3R-4JPQ-wBw3-2SigSk', 'scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4', 'scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.450995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6', 'dm-uuid-LVM-ZlF2XCDYZD1UtTLH1LhhUrb6phYn0u1WeQqWw9uj3pc9o5aJ38s0WNm1vGaeuKzj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xyw5Vv-Jx4o-JuFH-yepn-cBG4-Zh8H-ZbIRS8', 'scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf', 'scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99', 'dm-uuid-LVM-vIAxX0t2ryPCpAFoQJVGLUBsLLDK0CquaWCihnEnaolpdeNnFEztlu7vEUNpbjy2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346', 'scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451293 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.451324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WAwVmE-n97f-xQhq-5QXz-u2uN-qAiK-XLuUKK', 'scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2', 'scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04', 'dm-uuid-LVM-UfJRkDX0mNOpRn9nwFOha60VYmAVXjDX2XHEdZANKeX1Quek4W897jXn2caXrs1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITcX7v-glWJ-T3DR-3wBI-Q3pc-2gFu-ScaBKL', 'scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042', 'scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183', 'dm-uuid-LVM-e8VEXzThEhG23c1FWIDl5qgfhvlMa1sxAi5EyN7eYryES4U80WDiFO8vV4ZFfpdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df', 'scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451470 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.451480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 21:06:25.451552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ABygGI-gKWl-Ooen-nD1A-WWd5-W18E-XUWZuU', 'scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88', 'scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JUvZKB-Jhl3-cL3g-mivQ-F4rC-Vby2-gmCMXi', 'scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59', 'scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe', 'scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 21:06:25.451622 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.451632 | orchestrator | 2025-08-29 21:06:25.451642 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 21:06:25.451651 | orchestrator | Friday 29 August 2025 21:04:29 +0000 (0:00:00.507) 0:00:15.232 ********* 2025-08-29 21:06:25.451662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0', 
'dm-uuid-LVM-Fe5paP4RaHCNTyOtYUd48D6X5xKxbgN5ZF9ViuG9w6ObaWaik0UusXTfQv6Upnj5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43', 'dm-uuid-LVM-lGPIba1XCmCrdedZxItRlQ5wsxJuKeX73qUmJ1hQjhylmCIxoVBMqqptLe6gyQix'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6', 'dm-uuid-LVM-ZlF2XCDYZD1UtTLH1LhhUrb6phYn0u1WeQqWw9uj3pc9o5aJ38s0WNm1vGaeuKzj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16', 'scsi-SQEMU_QEMU_HARDDISK_14df01a6-19d6-409d-8aac-29053c3f8745-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99', 'dm-uuid-LVM-vIAxX0t2ryPCpAFoQJVGLUBsLLDK0CquaWCihnEnaolpdeNnFEztlu7vEUNpbjy2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0-osd--block--028c3e14--b13d--554d--9ec8--e0bdecd4a1f0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dw20gd-69Qv-eKty-yH3R-4JPQ-wBw3-2SigSk', 'scsi-0QEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4', 'scsi-SQEMU_QEMU_HARDDISK_87912232-aa7c-4262-871d-9bc5d73b0ac4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--79476f9b--63cb--5c74--926b--50a3eb682c43-osd--block--79476f9b--63cb--5c74--926b--50a3eb682c43'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xyw5Vv-Jx4o-JuFH-yepn-cBG4-Zh8H-ZbIRS8', 'scsi-0QEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf', 'scsi-SQEMU_QEMU_HARDDISK_8de48b33-02fa-44df-ab75-fb3adc163aaf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346', 'scsi-SQEMU_QEMU_HARDDISK_b39085cf-2099-4337-b75a-480912a54346'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.451982 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452055 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.452083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452118 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452199 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a3e5fad-a271-4574-87a4-9e8d4d1d75c0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452217 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76a76f98--f10a--56c2--85c8--c111ab4c87c6-osd--block--76a76f98--f10a--56c2--85c8--c111ab4c87c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WAwVmE-n97f-xQhq-5QXz-u2uN-qAiK-XLuUKK', 'scsi-0QEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2', 'scsi-SQEMU_QEMU_HARDDISK_51de580c-8abc-4940-b3c7-576b20a2ecb2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04', 'dm-uuid-LVM-UfJRkDX0mNOpRn9nwFOha60VYmAVXjDX2XHEdZANKeX1Quek4W897jXn2caXrs1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183', 'dm-uuid-LVM-e8VEXzThEhG23c1FWIDl5qgfhvlMa1sxAi5EyN7eYryES4U80WDiFO8vV4ZFfpdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f3fee7d3--6bcf--515f--a6c3--caef0862fd99-osd--block--f3fee7d3--6bcf--515f--a6c3--caef0862fd99'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITcX7v-glWJ-T3DR-3wBI-Q3pc-2gFu-ScaBKL', 'scsi-0QEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042', 'scsi-SQEMU_QEMU_HARDDISK_02349b33-ae7e-4f46-b237-ffaefc5b0042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452277 | orchestrator | 2025-08-29 21:06:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:25.452288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df', 'scsi-SQEMU_QEMU_HARDDISK_0fdcfb5c-5644-43f4-9439-4c34089784df'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU',
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452333 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.452343 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452367 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452403 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452413 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16', 'scsi-SQEMU_QEMU_HARDDISK_85cdd01a-0b69-40c0-874a-0ae950f34a38-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--275f26f1--4e1c--5372--9190--a1521a972d04-osd--block--275f26f1--4e1c--5372--9190--a1521a972d04'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ABygGI-gKWl-Ooen-nD1A-WWd5-W18E-XUWZuU', 'scsi-0QEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88', 'scsi-SQEMU_QEMU_HARDDISK_bf74f504-ac7d-4b49-a722-26f61d318d88'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c5db720f--fb16--50b5--adff--95cbe6288183-osd--block--c5db720f--fb16--50b5--adff--95cbe6288183'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JUvZKB-Jhl3-cL3g-mivQ-F4rC-Vby2-gmCMXi', 'scsi-0QEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59', 'scsi-SQEMU_QEMU_HARDDISK_3da68947-c337-4052-9861-a1ec6021be59'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe', 'scsi-SQEMU_QEMU_HARDDISK_9a372554-c439-41ad-8970-95d88d0b4dbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-20-13-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 21:06:25.452508 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.452518 | orchestrator | 2025-08-29 21:06:25.452533 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 21:06:25.452549 | orchestrator | Friday 29 August 2025 21:04:30 +0000 (0:00:00.550) 0:00:15.783 ********* 2025-08-29 21:06:25.452565 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.452582 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.452597 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.452613 | orchestrator | 2025-08-29 21:06:25.452628 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 21:06:25.452644 | orchestrator | Friday 29 August 2025 21:04:30 +0000 (0:00:00.624) 0:00:16.407 ********* 2025-08-29 21:06:25.452660 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.452675 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.452692 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.452710 | orchestrator | 2025-08-29 21:06:25.452728 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 21:06:25.452746 | orchestrator | Friday 29 August 2025 21:04:31 +0000 (0:00:00.348) 0:00:16.756 ********* 2025-08-29 21:06:25.452762 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.452778 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.452795 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.452812 | orchestrator | 2025-08-29 21:06:25.452828 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 21:06:25.452844 | orchestrator | Friday 29 August 2025 21:04:32 +0000 (0:00:00.698) 0:00:17.454 ********* 2025-08-29 21:06:25.452854 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.452864 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.452874 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.452883 | orchestrator | 2025-08-29 21:06:25.452893 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 21:06:25.452902 | orchestrator | Friday 29 August 2025 
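The long run of skipped items above is ceph-facts building an OSD device list only when automatic discovery is enabled; because the condition 'osd_auto_discovery | default(False) | bool' evaluates to false, every block device reported by the facts (loop0-loop7, dm-0, dm-1, sda, sdb, sdc, sdd, sr0) is skipped and the OSD devices come from the static configuration instead. A minimal ceph-ansible style sketch of the two variants, assuming explicit devices are pinned for the testbed nodes (the concrete device paths below are illustrative and not read from the testbed configuration):

    # Variant in effect here: discovery off, devices pinned in group_vars/inventory
    osd_auto_discovery: false
    devices:
      - /dev/sdb   # carries a ceph-* LV on testbed-node-4/5 according to the facts above
      - /dev/sdc   # carries a ceph-* LV on testbed-node-4/5 according to the facts above

    # Alternative: let ceph-facts generate the device list itself (the branch skipped above)
    # osd_auto_discovery: true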
21:04:32 +0000 (0:00:00.305) 0:00:17.760 ********* 2025-08-29 21:06:25.452912 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.452921 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.452931 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.452940 | orchestrator | 2025-08-29 21:06:25.452950 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 21:06:25.452959 | orchestrator | Friday 29 August 2025 21:04:32 +0000 (0:00:00.396) 0:00:18.156 ********* 2025-08-29 21:06:25.452968 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.452978 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.452987 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.453064 | orchestrator | 2025-08-29 21:06:25.453083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 21:06:25.453100 | orchestrator | Friday 29 August 2025 21:04:33 +0000 (0:00:00.460) 0:00:18.616 ********* 2025-08-29 21:06:25.453116 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 21:06:25.453134 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 21:06:25.453150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 21:06:25.453167 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 21:06:25.453186 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 21:06:25.453205 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 21:06:25.453223 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 21:06:25.453241 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 21:06:25.453257 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 21:06:25.453275 | orchestrator | 2025-08-29 21:06:25.453285 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 21:06:25.453295 | orchestrator | Friday 29 August 2025 21:04:34 +0000 (0:00:00.847) 0:00:19.464 ********* 2025-08-29 21:06:25.453304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 21:06:25.453314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 21:06:25.453323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 21:06:25.453333 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 21:06:25.453351 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 21:06:25.453361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 21:06:25.453370 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.453379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 21:06:25.453389 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 21:06:25.453398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 21:06:25.453407 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.453417 | orchestrator | 2025-08-29 21:06:25.453426 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 21:06:25.453435 | orchestrator | Friday 29 August 2025 21:04:34 +0000 (0:00:00.366) 0:00:19.830 ********* 2025-08-29 21:06:25.453445 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:06:25.453455 | orchestrator | 2025-08-29 21:06:25.453465 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 21:06:25.453475 | orchestrator | Friday 29 August 2025 21:04:35 +0000 (0:00:00.653) 0:00:20.483 ********* 2025-08-29 21:06:25.453493 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453503 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.453513 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.453522 | orchestrator | 2025-08-29 21:06:25.453532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 21:06:25.453541 | orchestrator | Friday 29 August 2025 21:04:35 +0000 (0:00:00.308) 0:00:20.791 ********* 2025-08-29 21:06:25.453550 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453560 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.453569 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.453579 | orchestrator | 2025-08-29 21:06:25.453588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 21:06:25.453598 | orchestrator | Friday 29 August 2025 21:04:35 +0000 (0:00:00.289) 0:00:21.081 ********* 2025-08-29 21:06:25.453607 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.453632 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:06:25.453640 | orchestrator | 2025-08-29 21:06:25.453648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 21:06:25.453655 | orchestrator | Friday 29 August 2025 21:04:35 +0000 (0:00:00.325) 0:00:21.407 ********* 2025-08-29 21:06:25.453663 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.453671 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.453679 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.453687 | orchestrator | 2025-08-29 21:06:25.453695 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 21:06:25.453702 | orchestrator | Friday 29 August 2025 21:04:36 +0000 (0:00:00.582) 0:00:21.990 ********* 2025-08-29 21:06:25.453710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:06:25.453718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:06:25.453725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:06:25.453733 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453741 | orchestrator | 2025-08-29 21:06:25.453755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 21:06:25.453768 | orchestrator | Friday 29 August 2025 21:04:36 +0000 (0:00:00.360) 0:00:22.350 ********* 2025-08-29 21:06:25.453780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:06:25.453793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:06:25.453807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:06:25.453819 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453832 | orchestrator | 2025-08-29 21:06:25.453846 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 21:06:25.453861 | orchestrator | Friday 29 August 2025 21:04:37 +0000 (0:00:00.344) 0:00:22.694 ********* 2025-08-29 21:06:25.453876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 21:06:25.453891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 21:06:25.453910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 21:06:25.453925 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.453938 | orchestrator | 2025-08-29 21:06:25.453951 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 21:06:25.453965 | orchestrator | Friday 29 August 2025 21:04:37 +0000 (0:00:00.340) 0:00:23.035 ********* 2025-08-29 21:06:25.453978 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:06:25.453992 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:06:25.454045 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:06:25.454054 | orchestrator | 2025-08-29 21:06:25.454062 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 21:06:25.454070 | orchestrator | Friday 29 August 2025 21:04:37 +0000 (0:00:00.327) 0:00:23.363 ********* 2025-08-29 21:06:25.454078 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 21:06:25.454086 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 21:06:25.454094 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 21:06:25.454102 | orchestrator | 2025-08-29 21:06:25.454110 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 21:06:25.454118 | orchestrator | Friday 29 August 2025 21:04:38 +0000 (0:00:00.494) 0:00:23.857 ********* 2025-08-29 21:06:25.454126 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 21:06:25.454134 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:06:25.454142 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:06:25.454150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 21:06:25.454158 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 21:06:25.454167 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 21:06:25.454185 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 21:06:25.454194 | orchestrator | 2025-08-29 21:06:25.454202 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 21:06:25.454210 | orchestrator | Friday 29 August 2025 21:04:39 +0000 (0:00:00.907) 0:00:24.764 ********* 2025-08-29 21:06:25.454218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 21:06:25.454226 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 21:06:25.454234 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 21:06:25.454242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 21:06:25.454250 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 
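In the ceph-facts tasks above only the IPv4 branches run: the monitor address map is collected per mon host, _radosgw_address is taken from an explicitly set radosgw_address (the radosgw_address_block and radosgw_interface branches are skipped), and exactly one rgw instance (item=0) is registered per host. A hedged sketch of the ceph-ansible variables that drive those branches; the address values are placeholders, not taken from the testbed configuration:

    ip_version: ipv4                  # keeps the ipv6 set_fact branches skipped, as logged above
    monitor_address: 192.168.16.10    # placeholder; set per monitor host
    radosgw_address: 192.168.16.13    # placeholder; set explicitly, so the *_block/*_interface branches are unused
    radosgw_num_instances: 1          # matches the single rgw_instances entry (item=0)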
21:06:25.454258 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 21:06:25.454273 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 21:06:25.454281 | orchestrator | 2025-08-29 21:06:25.454289 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 21:06:25.454297 | orchestrator | Friday 29 August 2025 21:04:41 +0000 (0:00:01.861) 0:00:26.625 ********* 2025-08-29 21:06:25.454305 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:06:25.454313 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:06:25.454321 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 21:06:25.454329 | orchestrator | 2025-08-29 21:06:25.454337 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 21:06:25.454345 | orchestrator | Friday 29 August 2025 21:04:41 +0000 (0:00:00.367) 0:00:26.993 ********* 2025-08-29 21:06:25.454355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:06:25.454365 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:06:25.454373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:06:25.454381 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:06:25.454469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 21:06:25.454480 | orchestrator | 2025-08-29 21:06:25.454493 | orchestrator | TASK [generate keys] *********************************************************** 2025-08-29 21:06:25.454501 | orchestrator | Friday 29 August 2025 21:05:27 +0000 (0:00:46.430) 0:01:13.423 ********* 2025-08-29 21:06:25.454509 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454517 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454551 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454564 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454589 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 21:06:25.454602 | orchestrator | 2025-08-29 21:06:25.454614 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 21:06:25.454627 | orchestrator | Friday 29 August 2025 21:05:53 +0000 (0:00:25.537) 0:01:38.960 ********* 2025-08-29 21:06:25.454640 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454652 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454665 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454679 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454694 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454708 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454722 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 21:06:25.454738 | orchestrator | 2025-08-29 21:06:25.454752 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 21:06:25.454765 | orchestrator | Friday 29 August 2025 21:06:05 +0000 (0:00:12.356) 0:01:51.316 ********* 2025-08-29 21:06:25.454778 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454792 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454805 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454818 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454831 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454847 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454863 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454871 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454887 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454894 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454902 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454910 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454918 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454926 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 21:06:25.454934 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 21:06:25.454942 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 21:06:25.454950 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-08-29 21:06:25.454957 | orchestrator | 2025-08-29 21:06:25.454965 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:06:25.454973 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-08-29 21:06:25.454992 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 21:06:25.455049 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 21:06:25.455058 | orchestrator | 2025-08-29 21:06:25.455066 | orchestrator | 2025-08-29 21:06:25.455074 | orchestrator | 2025-08-29 21:06:25.455082 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:06:25.455090 | orchestrator | Friday 29 August 2025 21:06:24 +0000 (0:00:18.415) 0:02:09.732 ********* 2025-08-29 21:06:25.455098 | orchestrator | =============================================================================== 2025-08-29 21:06:25.455106 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.43s 2025-08-29 21:06:25.455119 | orchestrator | generate keys ---------------------------------------------------------- 25.54s 2025-08-29 21:06:25.455127 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.42s 2025-08-29 21:06:25.455135 | orchestrator | get keys from monitors ------------------------------------------------- 12.36s 2025-08-29 21:06:25.455143 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.93s 2025-08-29 21:06:25.455151 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.86s 2025-08-29 21:06:25.455159 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.83s 2025-08-29 21:06:25.455167 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.91s 2025-08-29 21:06:25.455175 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-08-29 21:06:25.455183 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.73s 2025-08-29 21:06:25.455191 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.71s 2025-08-29 21:06:25.455198 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s 2025-08-29 21:06:25.455206 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.68s 2025-08-29 21:06:25.455214 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s 2025-08-29 21:06:25.455222 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s 2025-08-29 21:06:25.455230 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2025-08-29 21:06:25.455238 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s 2025-08-29 21:06:25.455246 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address 
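The "create openstack pool(s)" task above created the backups, volumes, images, metrics and vms pools with identical settings (replicated, size 3, pg_num/pgp_num 32, autoscaler off), and the subsequent key tasks generate and distribute the client keyrings (their items appear as item=None because the tasks run with no_log). A sketch of the ceph-ansible openstack_config variables reconstructed from the item dictionaries printed by the pool task; the example key entry is modelled on the ceph-ansible sample and is not the testbed's actual key definition:

    openstack_config: true
    openstack_pools:
      - name: backups
        application: rbd
        type: 1                      # replicated
        rule_name: replicated_rule
        size: 3
        min_size: 0
        pg_num: 32
        pgp_num: 32
        pg_autoscale_mode: false
        erasure_profile: ""
        expected_num_objects: ""
      # volumes, images, metrics and vms repeat the same settings,
      # matching the five 'changed' items logged above
    openstack_keys:
      - name: client.cinder          # illustrative entry; the real names and caps are hidden by no_log
        caps:
          mon: "profile rbd"
          osd: "profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images"
        mode: "0600"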
--------------- 0.58s 2025-08-29 21:06:25.455253 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.55s 2025-08-29 21:06:25.455261 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.51s 2025-08-29 21:06:28.491882 | orchestrator | 2025-08-29 21:06:28 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:06:28.493897 | orchestrator | 2025-08-29 21:06:28 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:28.494805 | orchestrator | 2025-08-29 21:06:28 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:28.494861 | orchestrator | 2025-08-29 21:06:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:31.533455 | orchestrator | 2025-08-29 21:06:31 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:06:31.533538 | orchestrator | 2025-08-29 21:06:31 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:31.533567 | orchestrator | 2025-08-29 21:06:31 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:31.533594 | orchestrator | 2025-08-29 21:06:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:34.569133 | orchestrator | 2025-08-29 21:06:34 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:06:34.570723 | orchestrator | 2025-08-29 21:06:34 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:34.571867 | orchestrator | 2025-08-29 21:06:34 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:34.572090 | orchestrator | 2025-08-29 21:06:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:37.615929 | orchestrator | 2025-08-29 21:06:37 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state STARTED 2025-08-29 21:06:37.618285 | orchestrator | 2025-08-29 21:06:37 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:37.620724 | orchestrator | 2025-08-29 21:06:37 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:37.620781 | orchestrator | 2025-08-29 21:06:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:40.666384 | orchestrator | 2025-08-29 21:06:40 | INFO  | Task e8b819ff-5c55-4181-a6ac-94b0ce58791d is in state SUCCESS 2025-08-29 21:06:40.667867 | orchestrator | 2025-08-29 21:06:40.667915 | orchestrator | 2025-08-29 21:06:40.667928 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:06:40.667940 | orchestrator | 2025-08-29 21:06:40.667952 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:06:40.667963 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.243) 0:00:00.243 ********* 2025-08-29 21:06:40.667975 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.667988 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.668042 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.668057 | orchestrator | 2025-08-29 21:06:40.668069 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:06:40.668280 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.253) 0:00:00.497 ********* 2025-08-29 21:06:40.668296 | orchestrator | ok: [testbed-node-0] => 
(item=enable_horizon_True) 2025-08-29 21:06:40.668308 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 21:06:40.668341 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 21:06:40.668362 | orchestrator | 2025-08-29 21:06:40.668380 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 21:06:40.668399 | orchestrator | 2025-08-29 21:06:40.668420 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 21:06:40.668440 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.337) 0:00:00.835 ********* 2025-08-29 21:06:40.668457 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:06:40.668477 | orchestrator | 2025-08-29 21:06:40.668489 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 21:06:40.668499 | orchestrator | Friday 29 August 2025 21:04:51 +0000 (0:00:00.445) 0:00:01.280 ********* 2025-08-29 21:06:40.668517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.668587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.668603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.668628 | orchestrator | 2025-08-29 21:06:40.668640 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 21:06:40.668651 | orchestrator | Friday 29 August 2025 21:04:52 +0000 (0:00:00.998) 0:00:02.278 ********* 2025-08-29 21:06:40.668662 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.668672 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.668683 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.668694 | orchestrator | 2025-08-29 21:06:40.668705 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 21:06:40.668716 | orchestrator | Friday 29 August 2025 21:04:52 +0000 (0:00:00.361) 0:00:02.640 ********* 2025-08-29 21:06:40.668735 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 21:06:40.668747 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 21:06:40.668758 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 21:06:40.668769 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 21:06:40.668779 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 21:06:40.668790 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 21:06:40.668801 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 21:06:40.668817 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 21:06:40.668828 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 21:06:40.668839 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 21:06:40.668849 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 21:06:40.668860 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 21:06:40.668987 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 21:06:40.669026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 21:06:40.669045 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 21:06:40.669068 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 21:06:40.669081 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 21:06:40.669094 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 
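The horizon container definitions dumped above encode which dashboard plugins are switched on (designate, magnum, manila and octavia yes; blazar, cloudkitty, heat, ironic, masakari, mistral, tacker, trove, watcher and zun no), the health check against each node's internal address, and the internal/external HAProxy frontends behind api.testbed.osism.xyz. A minimal kolla-ansible globals.yml sketch consistent with those flags; the variable names are standard kolla-ansible globals, while the values are inferred from the ENABLE_* environment and external_fqdn fields in the log rather than taken from the testbed configuration:

    enable_horizon: "yes"
    enable_designate: "yes"          # surfaces as ENABLE_DESIGNATE=yes in the horizon container
    enable_magnum: "yes"             # ENABLE_MAGNUM=yes
    enable_manila: "yes"             # ENABLE_MANILA=yes
    enable_octavia: "yes"            # ENABLE_OCTAVIA=yes
    enable_heat: "no"                # ENABLE_HEAT=no
    kolla_external_fqdn: "api.testbed.osism.xyz"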
21:06:40.669107 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 21:06:40.669119 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 21:06:40.669131 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 21:06:40.669144 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 21:06:40.669156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 21:06:40.669169 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 21:06:40.669181 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 21:06:40.669194 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 21:06:40.669205 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 21:06:40.669216 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 21:06:40.669226 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 21:06:40.669237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 21:06:40.669248 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 21:06:40.669259 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 21:06:40.669270 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 21:06:40.669281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 21:06:40.669292 | orchestrator | 2025-08-29 21:06:40.669303 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.669314 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.621) 0:00:03.261 ********* 2025-08-29 21:06:40.669325 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.669336 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.669346 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.669357 | orchestrator | 2025-08-29 21:06:40.669368 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.669379 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.252) 0:00:03.514 ********* 2025-08-29 21:06:40.669406 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669425 | 
orchestrator | 2025-08-29 21:06:40.669445 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.669466 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.107) 0:00:03.621 ********* 2025-08-29 21:06:40.669483 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669511 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.669523 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.669533 | orchestrator | 2025-08-29 21:06:40.669544 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.669555 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.363) 0:00:03.985 ********* 2025-08-29 21:06:40.669565 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.669576 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.669595 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.669612 | orchestrator | 2025-08-29 21:06:40.669638 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.669656 | orchestrator | Friday 29 August 2025 21:04:54 +0000 (0:00:00.328) 0:00:04.313 ********* 2025-08-29 21:06:40.669673 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669690 | orchestrator | 2025-08-29 21:06:40.669707 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.669725 | orchestrator | Friday 29 August 2025 21:04:54 +0000 (0:00:00.135) 0:00:04.449 ********* 2025-08-29 21:06:40.669743 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.669779 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.669790 | orchestrator | 2025-08-29 21:06:40.669801 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.669811 | orchestrator | Friday 29 August 2025 21:04:54 +0000 (0:00:00.367) 0:00:04.816 ********* 2025-08-29 21:06:40.669822 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.669833 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.669844 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.669854 | orchestrator | 2025-08-29 21:06:40.669865 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.669876 | orchestrator | Friday 29 August 2025 21:04:55 +0000 (0:00:00.306) 0:00:05.122 ********* 2025-08-29 21:06:40.669886 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669897 | orchestrator | 2025-08-29 21:06:40.669908 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.669918 | orchestrator | Friday 29 August 2025 21:04:55 +0000 (0:00:00.375) 0:00:05.498 ********* 2025-08-29 21:06:40.669929 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.669939 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.669950 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.669960 | orchestrator | 2025-08-29 21:06:40.669971 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.669982 | orchestrator | Friday 29 August 2025 21:04:55 +0000 (0:00:00.321) 0:00:05.819 ********* 2025-08-29 21:06:40.669992 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.670074 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 21:06:40.670087 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.670099 | orchestrator | 2025-08-29 21:06:40.670110 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.670121 | orchestrator | Friday 29 August 2025 21:04:56 +0000 (0:00:00.359) 0:00:06.179 ********* 2025-08-29 21:06:40.670131 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670142 | orchestrator | 2025-08-29 21:06:40.670153 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.670164 | orchestrator | Friday 29 August 2025 21:04:56 +0000 (0:00:00.120) 0:00:06.299 ********* 2025-08-29 21:06:40.670175 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670186 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.670197 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.670208 | orchestrator | 2025-08-29 21:06:40.670218 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.670229 | orchestrator | Friday 29 August 2025 21:04:56 +0000 (0:00:00.319) 0:00:06.618 ********* 2025-08-29 21:06:40.670240 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.670251 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.670271 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.670282 | orchestrator | 2025-08-29 21:06:40.670293 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.670304 | orchestrator | Friday 29 August 2025 21:04:57 +0000 (0:00:00.545) 0:00:07.164 ********* 2025-08-29 21:06:40.670315 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670326 | orchestrator | 2025-08-29 21:06:40.670337 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.670348 | orchestrator | Friday 29 August 2025 21:04:57 +0000 (0:00:00.144) 0:00:07.309 ********* 2025-08-29 21:06:40.670359 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670370 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.670381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.670391 | orchestrator | 2025-08-29 21:06:40.670402 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.670413 | orchestrator | Friday 29 August 2025 21:04:57 +0000 (0:00:00.294) 0:00:07.604 ********* 2025-08-29 21:06:40.670427 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.670446 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.670465 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.670485 | orchestrator | 2025-08-29 21:06:40.670505 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.670526 | orchestrator | Friday 29 August 2025 21:04:57 +0000 (0:00:00.329) 0:00:07.933 ********* 2025-08-29 21:06:40.670545 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670561 | orchestrator | 2025-08-29 21:06:40.670572 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.670582 | orchestrator | Friday 29 August 2025 21:04:58 +0000 (0:00:00.134) 0:00:08.068 ********* 2025-08-29 21:06:40.670593 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670604 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
21:06:40.670615 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.670626 | orchestrator | 2025-08-29 21:06:40.670637 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.670659 | orchestrator | Friday 29 August 2025 21:04:58 +0000 (0:00:00.446) 0:00:08.514 ********* 2025-08-29 21:06:40.670671 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.670682 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.670693 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.670704 | orchestrator | 2025-08-29 21:06:40.670714 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.670725 | orchestrator | Friday 29 August 2025 21:04:58 +0000 (0:00:00.293) 0:00:08.808 ********* 2025-08-29 21:06:40.670736 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670747 | orchestrator | 2025-08-29 21:06:40.670758 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.670776 | orchestrator | Friday 29 August 2025 21:04:58 +0000 (0:00:00.128) 0:00:08.936 ********* 2025-08-29 21:06:40.670794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.670812 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.670830 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.670847 | orchestrator | 2025-08-29 21:06:40.670872 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.670890 | orchestrator | Friday 29 August 2025 21:04:59 +0000 (0:00:00.279) 0:00:09.216 ********* 2025-08-29 21:06:40.670909 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.670927 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.670946 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.670957 | orchestrator | 2025-08-29 21:06:40.670967 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.670978 | orchestrator | Friday 29 August 2025 21:04:59 +0000 (0:00:00.323) 0:00:09.539 ********* 2025-08-29 21:06:40.670989 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.671037 | orchestrator | 2025-08-29 21:06:40.671051 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.671071 | orchestrator | Friday 29 August 2025 21:04:59 +0000 (0:00:00.129) 0:00:09.669 ********* 2025-08-29 21:06:40.671082 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.671093 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.671104 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.671115 | orchestrator | 2025-08-29 21:06:40.671126 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.671136 | orchestrator | Friday 29 August 2025 21:05:00 +0000 (0:00:00.510) 0:00:10.179 ********* 2025-08-29 21:06:40.671147 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.671158 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.671168 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.671179 | orchestrator | 2025-08-29 21:06:40.671190 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.671201 | orchestrator | Friday 29 August 2025 21:05:00 +0000 (0:00:00.312) 0:00:10.492 ********* 2025-08-29 21:06:40.671211 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 21:06:40.671223 | orchestrator | 2025-08-29 21:06:40.671233 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.671244 | orchestrator | Friday 29 August 2025 21:05:00 +0000 (0:00:00.122) 0:00:10.615 ********* 2025-08-29 21:06:40.671255 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.671266 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.671277 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.671287 | orchestrator | 2025-08-29 21:06:40.671298 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 21:06:40.671309 | orchestrator | Friday 29 August 2025 21:05:00 +0000 (0:00:00.280) 0:00:10.895 ********* 2025-08-29 21:06:40.671478 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:06:40.671501 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:06:40.671520 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:06:40.671538 | orchestrator | 2025-08-29 21:06:40.671556 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 21:06:40.671567 | orchestrator | Friday 29 August 2025 21:05:01 +0000 (0:00:00.473) 0:00:11.369 ********* 2025-08-29 21:06:40.671665 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.671681 | orchestrator | 2025-08-29 21:06:40.671692 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 21:06:40.671703 | orchestrator | Friday 29 August 2025 21:05:01 +0000 (0:00:00.129) 0:00:11.498 ********* 2025-08-29 21:06:40.671714 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.671725 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.671736 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.671746 | orchestrator | 2025-08-29 21:06:40.671757 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-08-29 21:06:40.671768 | orchestrator | Friday 29 August 2025 21:05:01 +0000 (0:00:00.301) 0:00:11.800 ********* 2025-08-29 21:06:40.671779 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:06:40.671790 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:06:40.671801 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:06:40.671811 | orchestrator | 2025-08-29 21:06:40.671822 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-08-29 21:06:40.671833 | orchestrator | Friday 29 August 2025 21:05:03 +0000 (0:00:01.761) 0:00:13.562 ********* 2025-08-29 21:06:40.671844 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 21:06:40.671855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 21:06:40.671866 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 21:06:40.671877 | orchestrator | 2025-08-29 21:06:40.671888 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-08-29 21:06:40.671899 | orchestrator | Friday 29 August 2025 21:05:05 +0000 (0:00:01.918) 0:00:15.480 ********* 2025-08-29 21:06:40.671921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 21:06:40.671932 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 21:06:40.671943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 21:06:40.671954 | orchestrator | 2025-08-29 21:06:40.671974 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-08-29 21:06:40.671988 | orchestrator | Friday 29 August 2025 21:05:07 +0000 (0:00:02.295) 0:00:17.776 ********* 2025-08-29 21:06:40.672036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 21:06:40.672055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 21:06:40.672072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 21:06:40.672090 | orchestrator | 2025-08-29 21:06:40.672109 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-08-29 21:06:40.672128 | orchestrator | Friday 29 August 2025 21:05:09 +0000 (0:00:01.529) 0:00:19.306 ********* 2025-08-29 21:06:40.672155 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.672174 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.672193 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.672211 | orchestrator | 2025-08-29 21:06:40.672230 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-08-29 21:06:40.672250 | orchestrator | Friday 29 August 2025 21:05:09 +0000 (0:00:00.249) 0:00:19.555 ********* 2025-08-29 21:06:40.672267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.672286 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.672304 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.672323 | orchestrator | 2025-08-29 21:06:40.672342 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 21:06:40.672362 | orchestrator | Friday 29 August 2025 21:05:09 +0000 (0:00:00.245) 0:00:19.801 ********* 2025-08-29 21:06:40.672381 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:06:40.672402 | orchestrator | 2025-08-29 21:06:40.672422 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 21:06:40.672440 | orchestrator | Friday 29 August 2025 21:05:10 +0000 (0:00:00.617) 0:00:20.418 ********* 2025-08-29 21:06:40.672464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.672540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.672565 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.672595 | orchestrator | 2025-08-29 21:06:40.672613 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 21:06:40.672632 | orchestrator | Friday 29 August 2025 21:05:11 +0000 (0:00:01.500) 0:00:21.918 ********* 2025-08-29 21:06:40.672679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.672705 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.672731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.672759 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.672785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.672804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.672822 | orchestrator | 2025-08-29 21:06:40.672839 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 21:06:40.672866 | orchestrator | Friday 29 August 2025 21:05:12 +0000 (0:00:00.547) 0:00:22.466 ********* 2025-08-29 21:06:40.672904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.672924 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.672943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.672976 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.673042 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 21:06:40.673066 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.673085 | orchestrator | 2025-08-29 21:06:40.673103 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 21:06:40.673122 | orchestrator | Friday 29 August 2025 21:05:13 +0000 (0:00:00.906) 0:00:23.373 ********* 2025-08-29 21:06:40.673141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.673198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.673221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 21:06:40.673250 | orchestrator | 2025-08-29 21:06:40.673261 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 21:06:40.673272 | orchestrator | Friday 29 August 2025 21:05:14 +0000 (0:00:01.216) 0:00:24.590 ********* 2025-08-29 21:06:40.673283 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:06:40.673294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:06:40.673305 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:06:40.673316 | orchestrator | 2025-08-29 21:06:40.673327 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 21:06:40.673345 | orchestrator | Friday 29 August 2025 21:05:14 +0000 (0:00:00.227) 0:00:24.817 ********* 2025-08-29 21:06:40.673357 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:06:40.673368 | orchestrator | 2025-08-29 21:06:40.673379 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-08-29 21:06:40.673389 | orchestrator | Friday 29 August 2025 21:05:15 +0000 (0:00:00.557) 0:00:25.374 ********* 2025-08-29 21:06:40.673400 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:06:40.673411 | orchestrator | 2025-08-29 21:06:40.673422 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-08-29 21:06:40.673433 | orchestrator | Friday 29 August 2025 21:05:17 
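
The horizon service definition dumped in the tasks above repeatedly carries the HAProxy frontend rule use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }: requests for ACME HTTP-01 challenge paths are diverted to the acme_client backend, while every other path stays with the regular Horizon backend. A minimal Python sketch of the same path test; the regular expression is taken verbatim from the rule above, and the backend name other than acme_client_back is a placeholder:

    import re

    # Pattern from the frontend_http_extra rule above (HAProxy evaluates it via path_reg).
    ACME_CHALLENGE = re.compile(r"^/\.well-known/acme-challenge/.+")

    def pick_backend(path: str) -> str:
        # ACME HTTP-01 challenge requests go to the acme_client backend;
        # all other paths are served by the normal Horizon backend (placeholder name).
        return "acme_client_back" if ACME_CHALLENGE.match(path) else "horizon_backend"

    print(pick_backend("/.well-known/acme-challenge/abc123"))  # acme_client_back
    print(pick_backend("/auth/login/"))                        # horizon_backend
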
+0000 (0:00:02.205) 0:00:27.580 ********* 2025-08-29 21:06:40.673443 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:06:40.673454 | orchestrator | 2025-08-29 21:06:40.673465 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-08-29 21:06:40.673481 | orchestrator | Friday 29 August 2025 21:05:19 +0000 (0:00:02.232) 0:00:29.813 ********* 2025-08-29 21:06:40.673492 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:06:40.673503 | orchestrator | 2025-08-29 21:06:40.673514 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 21:06:40.673525 | orchestrator | Friday 29 August 2025 21:05:35 +0000 (0:00:15.997) 0:00:45.811 ********* 2025-08-29 21:06:40.673535 | orchestrator | 2025-08-29 21:06:40.673546 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 21:06:40.673557 | orchestrator | Friday 29 August 2025 21:05:35 +0000 (0:00:00.080) 0:00:45.891 ********* 2025-08-29 21:06:40.673568 | orchestrator | 2025-08-29 21:06:40.673579 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 21:06:40.673589 | orchestrator | Friday 29 August 2025 21:05:35 +0000 (0:00:00.065) 0:00:45.957 ********* 2025-08-29 21:06:40.673600 | orchestrator | 2025-08-29 21:06:40.673618 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-08-29 21:06:40.673629 | orchestrator | Friday 29 August 2025 21:05:36 +0000 (0:00:00.087) 0:00:46.044 ********* 2025-08-29 21:06:40.673639 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:06:40.673650 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:06:40.673661 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:06:40.673671 | orchestrator | 2025-08-29 21:06:40.673682 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:06:40.673694 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-08-29 21:06:40.673705 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 21:06:40.673716 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 21:06:40.673727 | orchestrator | 2025-08-29 21:06:40.673738 | orchestrator | 2025-08-29 21:06:40.673749 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:06:40.673759 | orchestrator | Friday 29 August 2025 21:06:37 +0000 (0:01:01.670) 0:01:47.715 ********* 2025-08-29 21:06:40.673770 | orchestrator | =============================================================================== 2025-08-29 21:06:40.673781 | orchestrator | horizon : Restart horizon container ------------------------------------ 61.67s 2025-08-29 21:06:40.673792 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.00s 2025-08-29 21:06:40.673802 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.30s 2025-08-29 21:06:40.673813 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.23s 2025-08-29 21:06:40.673823 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.21s 2025-08-29 21:06:40.673834 | orchestrator | horizon : Copying over horizon.conf 
------------------------------------- 1.92s 2025-08-29 21:06:40.673845 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s 2025-08-29 21:06:40.673855 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.53s 2025-08-29 21:06:40.673866 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.50s 2025-08-29 21:06:40.673877 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.22s 2025-08-29 21:06:40.673887 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.00s 2025-08-29 21:06:40.673898 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.91s 2025-08-29 21:06:40.673909 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-08-29 21:06:40.673919 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-08-29 21:06:40.673930 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-08-29 21:06:40.673941 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.55s 2025-08-29 21:06:40.673951 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-08-29 21:06:40.673962 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-08-29 21:06:40.673973 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2025-08-29 21:06:40.673983 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s 2025-08-29 21:06:40.673994 | orchestrator | 2025-08-29 21:06:40 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:40.674071 | orchestrator | 2025-08-29 21:06:40 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:40.674083 | orchestrator | 2025-08-29 21:06:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:43.715566 | orchestrator | 2025-08-29 21:06:43 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:43.718780 | orchestrator | 2025-08-29 21:06:43 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:43.719347 | orchestrator | 2025-08-29 21:06:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:46.777208 | orchestrator | 2025-08-29 21:06:46 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:46.779254 | orchestrator | 2025-08-29 21:06:46 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:46.779354 | orchestrator | 2025-08-29 21:06:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:49.820641 | orchestrator | 2025-08-29 21:06:49 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:49.820774 | orchestrator | 2025-08-29 21:06:49 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:49.820819 | orchestrator | 2025-08-29 21:06:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:52.884319 | orchestrator | 2025-08-29 21:06:52 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state STARTED 2025-08-29 21:06:52.886440 | orchestrator | 2025-08-29 21:06:52 | INFO  | Task 
5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:52.886789 | orchestrator | 2025-08-29 21:06:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:55.933202 | orchestrator | 2025-08-29 21:06:55 | INFO  | Task b4b0ca23-74ac-419f-a0ac-431be78d829f is in state SUCCESS 2025-08-29 21:06:55.935116 | orchestrator | 2025-08-29 21:06:55 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:06:55.936286 | orchestrator | 2025-08-29 21:06:55 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:55.936328 | orchestrator | 2025-08-29 21:06:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:06:58.978815 | orchestrator | 2025-08-29 21:06:58 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:06:58.980967 | orchestrator | 2025-08-29 21:06:58 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:06:58.981183 | orchestrator | 2025-08-29 21:06:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:02.026654 | orchestrator | 2025-08-29 21:07:02 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:02.027422 | orchestrator | 2025-08-29 21:07:02 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:02.027449 | orchestrator | 2025-08-29 21:07:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:05.071389 | orchestrator | 2025-08-29 21:07:05 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:05.072278 | orchestrator | 2025-08-29 21:07:05 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:05.072322 | orchestrator | 2025-08-29 21:07:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:08.109299 | orchestrator | 2025-08-29 21:07:08 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:08.112251 | orchestrator | 2025-08-29 21:07:08 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:08.112592 | orchestrator | 2025-08-29 21:07:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:11.152231 | orchestrator | 2025-08-29 21:07:11 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:11.152364 | orchestrator | 2025-08-29 21:07:11 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:11.152380 | orchestrator | 2025-08-29 21:07:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:14.186999 | orchestrator | 2025-08-29 21:07:14 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:14.187760 | orchestrator | 2025-08-29 21:07:14 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:14.187849 | orchestrator | 2025-08-29 21:07:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:17.228697 | orchestrator | 2025-08-29 21:07:17 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:17.229548 | orchestrator | 2025-08-29 21:07:17 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:17.229750 | orchestrator | 2025-08-29 21:07:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:20.277112 | orchestrator | 2025-08-29 21:07:20 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:20.277468 | orchestrator 
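
The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines come from the deploy wrapper polling the OSISM task queue until each task reports SUCCESS. A rough sketch of such a poll loop, assuming a get_task_state(task_id) lookup is available; the helper name and details are hypothetical and not the actual osism client API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        # Illustrative poll loop: keep checking until no task is left in a
        # non-final state. STARTED/SUCCESS mirror the states printed above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical state lookup
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
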
| 2025-08-29 21:07:20 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:20.277497 | orchestrator | 2025-08-29 21:07:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:23.319174 | orchestrator | 2025-08-29 21:07:23 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:23.319697 | orchestrator | 2025-08-29 21:07:23 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:23.319732 | orchestrator | 2025-08-29 21:07:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:26.362518 | orchestrator | 2025-08-29 21:07:26 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:26.364122 | orchestrator | 2025-08-29 21:07:26 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:26.364161 | orchestrator | 2025-08-29 21:07:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:29.413732 | orchestrator | 2025-08-29 21:07:29 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:29.415650 | orchestrator | 2025-08-29 21:07:29 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:29.415699 | orchestrator | 2025-08-29 21:07:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:32.459049 | orchestrator | 2025-08-29 21:07:32 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:32.461928 | orchestrator | 2025-08-29 21:07:32 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:32.461960 | orchestrator | 2025-08-29 21:07:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:35.504155 | orchestrator | 2025-08-29 21:07:35 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:35.505242 | orchestrator | 2025-08-29 21:07:35 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state STARTED 2025-08-29 21:07:35.505281 | orchestrator | 2025-08-29 21:07:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:38.542470 | orchestrator | 2025-08-29 21:07:38 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:38.543067 | orchestrator | 2025-08-29 21:07:38 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:38.543909 | orchestrator | 2025-08-29 21:07:38 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:38.544608 | orchestrator | 2025-08-29 21:07:38 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:38.547355 | orchestrator | 2025-08-29 21:07:38 | INFO  | Task 5285d6f9-1431-41f6-a609-ca0886e6cc79 is in state SUCCESS 2025-08-29 21:07:38.548826 | orchestrator | 2025-08-29 21:07:38.548858 | orchestrator | 2025-08-29 21:07:38.548869 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-08-29 21:07:38.548881 | orchestrator | 2025-08-29 21:07:38.548893 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-08-29 21:07:38.549399 | orchestrator | Friday 29 August 2025 21:06:28 +0000 (0:00:00.164) 0:00:00.165 ********* 2025-08-29 21:07:38.549420 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-08-29 21:07:38.549433 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549444 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549455 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:07:38.549466 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549477 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-08-29 21:07:38.549487 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-08-29 21:07:38.549498 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-08-29 21:07:38.549509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-08-29 21:07:38.549520 | orchestrator | 2025-08-29 21:07:38.549531 | orchestrator | TASK [Create share directory] ************************************************** 2025-08-29 21:07:38.549542 | orchestrator | Friday 29 August 2025 21:06:32 +0000 (0:00:04.173) 0:00:04.338 ********* 2025-08-29 21:07:38.549553 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 21:07:38.549564 | orchestrator | 2025-08-29 21:07:38.549575 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-08-29 21:07:38.549586 | orchestrator | Friday 29 August 2025 21:06:33 +0000 (0:00:00.994) 0:00:05.333 ********* 2025-08-29 21:07:38.549597 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-08-29 21:07:38.549608 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549618 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549629 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:07:38.549649 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549660 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-08-29 21:07:38.549670 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-08-29 21:07:38.549682 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-08-29 21:07:38.549693 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-08-29 21:07:38.549703 | orchestrator | 2025-08-29 21:07:38.549714 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 21:07:38.549725 | orchestrator | Friday 29 August 2025 21:06:47 +0000 (0:00:13.164) 0:00:18.497 ********* 2025-08-29 21:07:38.549736 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 21:07:38.549747 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549758 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549782 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:07:38.549793 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2025-08-29 21:07:38.549804 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 21:07:38.549815 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 21:07:38.549825 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 21:07:38.549836 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 21:07:38.549846 | orchestrator | 2025-08-29 21:07:38.549857 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:07:38.549868 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:07:38.549880 | orchestrator | 2025-08-29 21:07:38.549891 | orchestrator | 2025-08-29 21:07:38.549902 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:07:38.549913 | orchestrator | Friday 29 August 2025 21:06:53 +0000 (0:00:06.623) 0:00:25.121 ********* 2025-08-29 21:07:38.549923 | orchestrator | =============================================================================== 2025-08-29 21:07:38.549934 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.16s 2025-08-29 21:07:38.549944 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.62s 2025-08-29 21:07:38.549955 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s 2025-08-29 21:07:38.549966 | orchestrator | Create share directory -------------------------------------------------- 0.99s 2025-08-29 21:07:38.549977 | orchestrator | 2025-08-29 21:07:38.549987 | orchestrator | 2025-08-29 21:07:38.550061 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:07:38.550073 | orchestrator | 2025-08-29 21:07:38.550129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:07:38.550142 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.211) 0:00:00.211 ********* 2025-08-29 21:07:38.550153 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.550165 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.550176 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.550187 | orchestrator | 2025-08-29 21:07:38.550198 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:07:38.550209 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.216) 0:00:00.428 ********* 2025-08-29 21:07:38.550220 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 21:07:38.550231 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 21:07:38.550242 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 21:07:38.550253 | orchestrator | 2025-08-29 21:07:38.550264 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 21:07:38.550275 | orchestrator | 2025-08-29 21:07:38.550286 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.550296 | orchestrator | Friday 29 August 2025 21:04:50 +0000 (0:00:00.316) 0:00:00.744 ********* 2025-08-29 21:07:38.550308 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 21:07:38.550319 | orchestrator | 2025-08-29 21:07:38.550329 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 21:07:38.550340 | orchestrator | Friday 29 August 2025 21:04:51 +0000 (0:00:00.480) 0:00:01.224 ********* 2025-08-29 21:07:38.550364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.550389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.550437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 
21:07:38.550453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550537 | orchestrator | 2025-08-29 21:07:38.550548 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 21:07:38.550559 | orchestrator | Friday 29 August 2025 21:04:52 +0000 (0:00:01.662) 0:00:02.887 ********* 2025-08-29 21:07:38.550576 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 21:07:38.550587 | orchestrator | 2025-08-29 21:07:38.550598 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 21:07:38.550610 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.731) 0:00:03.619 ********* 2025-08-29 21:07:38.550621 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.550632 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.550643 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.550654 | orchestrator | 2025-08-29 21:07:38.550665 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 21:07:38.550676 | orchestrator | Friday 29 August 2025 21:04:53 +0000 (0:00:00.368) 0:00:03.988 ********* 2025-08-29 21:07:38.550687 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:07:38.550698 | orchestrator | 2025-08-29 21:07:38.550709 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.550720 | orchestrator | Friday 29 August 2025 21:04:54 +0000 (0:00:00.658) 0:00:04.646 ********* 2025-08-29 21:07:38.550731 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:07:38.550748 | orchestrator | 2025-08-29 21:07:38.550759 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 21:07:38.550770 | orchestrator | Friday 29 August 2025 21:04:55 +0000 (0:00:00.581) 0:00:05.228 ********* 2025-08-29 21:07:38.550787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.550800 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.550813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.550834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.550918 | orchestrator | 2025-08-29 21:07:38.550929 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 21:07:38.550940 | orchestrator | Friday 29 August 2025 21:04:58 +0000 (0:00:03.715) 0:00:08.943 ********* 2025-08-29 21:07:38.550959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.550978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551053 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.551066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.551078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551123 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.551135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.551152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551175 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.551186 | orchestrator | 2025-08-29 21:07:38.551197 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 21:07:38.551208 | orchestrator | Friday 29 August 2025 21:04:59 +0000 (0:00:00.528) 0:00:09.472 ********* 2025-08-29 21:07:38.551220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.551246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551269 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.551286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.551298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551320 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.551339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 21:07:38.551358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 21:07:38.551381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.551392 | orchestrator | 2025-08-29 21:07:38.551403 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 21:07:38.551414 | orchestrator | Friday 29 August 2025 21:05:00 +0000 (0:00:00.764) 0:00:10.236 ********* 2025-08-29 21:07:38.551430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.551443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.551469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.551482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551570 | orchestrator | 2025-08-29 21:07:38.551581 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 21:07:38.551592 | orchestrator | Friday 29 August 2025 21:05:03 +0000 (0:00:03.717) 0:00:13.953 ********* 2025-08-29 21:07:38.551604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.551620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 
21:07:38.551651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.551682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.551698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.551739 | orchestrator | 2025-08-29 21:07:38.551750 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 21:07:38.551761 | orchestrator | Friday 29 August 2025 21:05:09 +0000 (0:00:05.291) 0:00:19.244 ********* 2025-08-29 21:07:38.551772 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.551783 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:07:38.551794 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:07:38.551805 | orchestrator | 2025-08-29 21:07:38.551815 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 21:07:38.551826 | orchestrator | Friday 29 August 2025 21:05:10 +0000 (0:00:01.312) 0:00:20.556 ********* 2025-08-29 21:07:38.551837 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.551848 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.551859 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.551869 | orchestrator | 2025-08-29 21:07:38.551880 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 21:07:38.551896 | orchestrator | Friday 29 August 2025 21:05:11 +0000 (0:00:00.571) 0:00:21.128 ********* 2025-08-29 21:07:38.551907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.551919 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.551930 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.551940 | orchestrator | 2025-08-29 21:07:38.551951 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-08-29 21:07:38.551962 | orchestrator | Friday 29 August 2025 21:05:11 +0000 (0:00:00.239) 0:00:21.368 ********* 2025-08-29 21:07:38.551973 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.551984 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.552047 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.552059 | orchestrator | 2025-08-29 21:07:38.552070 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-08-29 21:07:38.552081 | orchestrator | Friday 29 August 2025 21:05:11 +0000 (0:00:00.360) 0:00:21.728 ********* 2025-08-29 21:07:38.552093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.552126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.552154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 21:07:38.552183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.552201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.552211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.552221 | orchestrator | 2025-08-29 21:07:38.552231 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.552241 | orchestrator | Friday 29 August 2025 21:05:14 +0000 (0:00:02.377) 0:00:24.106 ********* 2025-08-29 21:07:38.552250 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.552260 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 21:07:38.552270 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.552279 | orchestrator | 2025-08-29 21:07:38.552289 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-08-29 21:07:38.552298 | orchestrator | Friday 29 August 2025 21:05:14 +0000 (0:00:00.243) 0:00:24.349 ********* 2025-08-29 21:07:38.552308 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 21:07:38.552318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 21:07:38.552328 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 21:07:38.552337 | orchestrator | 2025-08-29 21:07:38.552352 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 21:07:38.552361 | orchestrator | Friday 29 August 2025 21:05:15 +0000 (0:00:01.606) 0:00:25.955 ********* 2025-08-29 21:07:38.552371 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:07:38.552381 | orchestrator | 2025-08-29 21:07:38.552390 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-08-29 21:07:38.552400 | orchestrator | Friday 29 August 2025 21:05:17 +0000 (0:00:01.179) 0:00:27.134 ********* 2025-08-29 21:07:38.552409 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.552419 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.552428 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.552438 | orchestrator | 2025-08-29 21:07:38.552448 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-08-29 21:07:38.552457 | orchestrator | Friday 29 August 2025 21:05:17 +0000 (0:00:00.524) 0:00:27.658 ********* 2025-08-29 21:07:38.552467 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:07:38.552476 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 21:07:38.552486 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 21:07:38.552495 | orchestrator | 2025-08-29 21:07:38.552505 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-08-29 21:07:38.552515 | orchestrator | Friday 29 August 2025 21:05:18 +0000 (0:00:00.999) 0:00:28.658 ********* 2025-08-29 21:07:38.552531 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.552540 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.552550 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.552560 | orchestrator | 2025-08-29 21:07:38.552570 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-08-29 21:07:38.552579 | orchestrator | Friday 29 August 2025 21:05:18 +0000 (0:00:00.279) 0:00:28.937 ********* 2025-08-29 21:07:38.552589 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 21:07:38.552598 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 21:07:38.552608 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 21:07:38.552617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 21:07:38.552627 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 
'fernet-rotate.sh'}) 2025-08-29 21:07:38.552637 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 21:07:38.552647 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 21:07:38.552661 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 21:07:38.552671 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 21:07:38.552681 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 21:07:38.552690 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 21:07:38.552700 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 21:07:38.552709 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 21:07:38.552719 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 21:07:38.552729 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 21:07:38.552738 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:07:38.552748 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:07:38.552758 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:07:38.552768 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:07:38.552777 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:07:38.552786 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:07:38.552796 | orchestrator | 2025-08-29 21:07:38.552806 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-08-29 21:07:38.552815 | orchestrator | Friday 29 August 2025 21:05:28 +0000 (0:00:09.533) 0:00:38.470 ********* 2025-08-29 21:07:38.552825 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:07:38.552834 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:07:38.552843 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:07:38.552853 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:07:38.552863 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:07:38.552872 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:07:38.552888 | orchestrator | 2025-08-29 21:07:38.552898 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-08-29 21:07:38.552912 | orchestrator | Friday 29 August 2025 21:05:31 +0000 (0:00:02.667) 0:00:41.137 ********* 2025-08-29 21:07:38.552923 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 21:07:38.552961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.552984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.553010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 21:07:38.553021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.553035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 21:07:38.553046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2025-08-29 21:07:38.553056 | orchestrator | 2025-08-29 21:07:38.553065 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.553075 | orchestrator | Friday 29 August 2025 21:05:33 +0000 (0:00:02.413) 0:00:43.551 ********* 2025-08-29 21:07:38.553085 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.553095 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.553105 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.553115 | orchestrator | 2025-08-29 21:07:38.553124 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-08-29 21:07:38.553134 | orchestrator | Friday 29 August 2025 21:05:33 +0000 (0:00:00.260) 0:00:43.812 ********* 2025-08-29 21:07:38.553144 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553159 | orchestrator | 2025-08-29 21:07:38.553169 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-08-29 21:07:38.553179 | orchestrator | Friday 29 August 2025 21:05:36 +0000 (0:00:02.274) 0:00:46.087 ********* 2025-08-29 21:07:38.553189 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553198 | orchestrator | 2025-08-29 21:07:38.553208 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-08-29 21:07:38.553218 | orchestrator | Friday 29 August 2025 21:05:38 +0000 (0:00:02.149) 0:00:48.236 ********* 2025-08-29 21:07:38.553228 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.553238 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.553248 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.553257 | orchestrator | 2025-08-29 21:07:38.553267 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-08-29 21:07:38.553277 | orchestrator | Friday 29 August 2025 21:05:39 +0000 (0:00:01.145) 0:00:49.381 ********* 2025-08-29 21:07:38.553287 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.553297 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.553306 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.553316 | orchestrator | 2025-08-29 21:07:38.553331 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-08-29 21:07:38.553341 | orchestrator | Friday 29 August 2025 21:05:39 +0000 (0:00:00.324) 0:00:49.706 ********* 2025-08-29 21:07:38.553351 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.553361 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.553371 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:07:38.553380 | orchestrator | 2025-08-29 21:07:38.553390 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-08-29 21:07:38.553399 | orchestrator | Friday 29 August 2025 21:05:40 +0000 (0:00:00.358) 0:00:50.064 ********* 2025-08-29 21:07:38.553409 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553419 | orchestrator | 2025-08-29 21:07:38.553428 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-08-29 21:07:38.553438 | orchestrator | Friday 29 August 2025 21:05:54 +0000 (0:00:14.142) 0:01:04.207 ********* 2025-08-29 21:07:38.553448 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553458 | orchestrator | 2025-08-29 21:07:38.553467 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-08-29 21:07:38.553477 | orchestrator | Friday 29 August 2025 21:06:04 +0000 (0:00:10.342) 0:01:14.549 ********* 2025-08-29 21:07:38.553487 | orchestrator | 2025-08-29 21:07:38.553496 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 21:07:38.553506 | orchestrator | Friday 29 August 2025 21:06:04 +0000 (0:00:00.063) 0:01:14.612 ********* 2025-08-29 21:07:38.553516 | orchestrator | 2025-08-29 21:07:38.553525 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-08-29 21:07:38.553535 | orchestrator | Friday 29 August 2025 21:06:04 +0000 (0:00:00.235) 0:01:14.848 ********* 2025-08-29 21:07:38.553545 | orchestrator | 2025-08-29 21:07:38.553554 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-08-29 21:07:38.553564 | orchestrator | Friday 29 August 2025 21:06:04 +0000 (0:00:00.064) 0:01:14.912 ********* 2025-08-29 21:07:38.553574 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553583 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:07:38.553593 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:07:38.553603 | orchestrator | 2025-08-29 21:07:38.553612 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-08-29 21:07:38.553622 | orchestrator | Friday 29 August 2025 21:06:27 +0000 (0:00:22.525) 0:01:37.437 ********* 2025-08-29 21:07:38.553632 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553641 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:07:38.553651 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:07:38.553661 | orchestrator | 2025-08-29 21:07:38.553671 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-08-29 21:07:38.553680 | orchestrator | Friday 29 August 2025 21:06:38 +0000 (0:00:10.639) 0:01:48.076 ********* 2025-08-29 21:07:38.553695 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553705 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:07:38.553719 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:07:38.553729 | orchestrator | 2025-08-29 21:07:38.553739 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.553748 | orchestrator | Friday 29 August 2025 21:06:50 +0000 (0:00:12.043) 0:02:00.120 ********* 2025-08-29 21:07:38.553758 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:07:38.553768 | orchestrator | 2025-08-29 21:07:38.553778 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-08-29 21:07:38.553787 | orchestrator | Friday 29 August 2025 21:06:50 +0000 (0:00:00.729) 0:02:00.849 ********* 2025-08-29 21:07:38.553797 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.553807 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:07:38.553816 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:07:38.553826 | orchestrator | 2025-08-29 21:07:38.553836 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-08-29 21:07:38.553846 | orchestrator | Friday 29 August 2025 21:06:51 +0000 (0:00:00.762) 0:02:01.612 ********* 2025-08-29 21:07:38.553855 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:07:38.553865 | orchestrator | 
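The fernet key distribution step above depends on the keystone_ssh sidecar, whose healthcheck ('healthcheck_listen sshd 8023') shows it listening on port 8023. A minimal Ansible sketch of such a "wait for the SSH port" step, assuming the controller API address seen in the healthcheck output above; the real kolla-ansible task covers all keystone hosts, this only shows the shape of the check:

- name: Wait for the keystone-ssh sidecar before distributing fernet keys (sketch)
  hosts: keystone
  gather_facts: false
  tasks:
    - name: Block until sshd inside the keystone_ssh container accepts connections
      ansible.builtin.wait_for:
        host: 192.168.16.10   # assumed controller API address, taken from the healthcheck output above
        port: 8023            # port probed by 'healthcheck_listen sshd 8023'
        timeout: 60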
2025-08-29 21:07:38.553875 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-08-29 21:07:38.553885 | orchestrator | Friday 29 August 2025 21:06:53 +0000 (0:00:01.760) 0:02:03.372 ********* 2025-08-29 21:07:38.553895 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-08-29 21:07:38.553905 | orchestrator | 2025-08-29 21:07:38.553914 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-08-29 21:07:38.553924 | orchestrator | Friday 29 August 2025 21:07:03 +0000 (0:00:10.218) 0:02:13.590 ********* 2025-08-29 21:07:38.553934 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-08-29 21:07:38.553943 | orchestrator | 2025-08-29 21:07:38.553953 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-08-29 21:07:38.553963 | orchestrator | Friday 29 August 2025 21:07:24 +0000 (0:00:21.407) 0:02:34.998 ********* 2025-08-29 21:07:38.553972 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-08-29 21:07:38.553982 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-08-29 21:07:38.554061 | orchestrator | 2025-08-29 21:07:38.554073 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-08-29 21:07:38.554083 | orchestrator | Friday 29 August 2025 21:07:31 +0000 (0:00:06.653) 0:02:41.651 ********* 2025-08-29 21:07:38.554093 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.554103 | orchestrator | 2025-08-29 21:07:38.554113 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-08-29 21:07:38.554122 | orchestrator | Friday 29 August 2025 21:07:31 +0000 (0:00:00.123) 0:02:41.774 ********* 2025-08-29 21:07:38.554132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.554142 | orchestrator | 2025-08-29 21:07:38.554152 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-08-29 21:07:38.554162 | orchestrator | Friday 29 August 2025 21:07:32 +0000 (0:00:00.267) 0:02:42.042 ********* 2025-08-29 21:07:38.554172 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.554181 | orchestrator | 2025-08-29 21:07:38.554197 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-08-29 21:07:38.554207 | orchestrator | Friday 29 August 2025 21:07:32 +0000 (0:00:00.141) 0:02:42.184 ********* 2025-08-29 21:07:38.554217 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.554227 | orchestrator | 2025-08-29 21:07:38.554236 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-08-29 21:07:38.554246 | orchestrator | Friday 29 August 2025 21:07:32 +0000 (0:00:00.313) 0:02:42.497 ********* 2025-08-29 21:07:38.554263 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:07:38.554273 | orchestrator | 2025-08-29 21:07:38.554282 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 21:07:38.554292 | orchestrator | Friday 29 August 2025 21:07:35 +0000 (0:00:03.080) 0:02:45.578 ********* 2025-08-29 21:07:38.554302 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:07:38.554312 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:07:38.554321 | orchestrator | skipping: [testbed-node-2] 
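The service-ks-register tasks above add keystone itself to the service catalog and point the internal and public endpoints at the two FQDNs shown. An illustrative equivalent using the openstack.cloud collection (not the actual kolla-ansible code), assuming admin credentials are supplied via clouds.yaml or OS_* environment variables:

- name: Register the identity service and its endpoints (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the identity service exists in the catalog
      openstack.cloud.catalog_service:
        name: keystone
        service_type: identity
        state: present

    - name: Ensure the internal and public endpoints point at the API VIPs
      openstack.cloud.endpoint:
        service: keystone
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:5000" }
        - { interface: public, url: "https://api.testbed.osism.xyz:5000" }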
2025-08-29 21:07:38.554331 | orchestrator | 
2025-08-29 21:07:38.554341 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 21:07:38.554351 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-08-29 21:07:38.554361 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 21:07:38.554371 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 21:07:38.554381 | orchestrator | 
2025-08-29 21:07:38.554391 | orchestrator | 
2025-08-29 21:07:38.554400 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 21:07:38.554410 | orchestrator | Friday 29 August 2025 21:07:35 +0000 (0:00:00.386) 0:02:45.964 *********
2025-08-29 21:07:38.554420 | orchestrator | ===============================================================================
2025-08-29 21:07:38.554429 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.53s
2025-08-29 21:07:38.554439 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.41s
2025-08-29 21:07:38.554449 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.14s
2025-08-29 21:07:38.554458 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.04s
2025-08-29 21:07:38.554468 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.64s
2025-08-29 21:07:38.554483 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.34s
2025-08-29 21:07:38.554492 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.22s
2025-08-29 21:07:38.554502 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.53s
2025-08-29 21:07:38.554512 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.65s
2025-08-29 21:07:38.554521 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.29s
2025-08-29 21:07:38.554531 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.72s
2025-08-29 21:07:38.554540 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.72s
2025-08-29 21:07:38.554548 | orchestrator | keystone : Creating default user role ----------------------------------- 3.08s
2025-08-29 21:07:38.554556 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s
2025-08-29 21:07:38.554563 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.41s
2025-08-29 21:07:38.554571 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.38s
2025-08-29 21:07:38.554579 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.27s
2025-08-29 21:07:38.554587 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.15s
2025-08-29 21:07:38.554595 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.76s
2025-08-29 21:07:38.554603 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.66s
2025-08-29 21:07:38.554611 | orchestrator | 
2025-08-29 21:07:38 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:38.554619 | orchestrator | 2025-08-29 21:07:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:41.580209 | orchestrator | 2025-08-29 21:07:41 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:41.580309 | orchestrator | 2025-08-29 21:07:41 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:41.580793 | orchestrator | 2025-08-29 21:07:41 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:41.582206 | orchestrator | 2025-08-29 21:07:41 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:41.582630 | orchestrator | 2025-08-29 21:07:41 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:41.582657 | orchestrator | 2025-08-29 21:07:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:44.612197 | orchestrator | 2025-08-29 21:07:44 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:44.612290 | orchestrator | 2025-08-29 21:07:44 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:44.612306 | orchestrator | 2025-08-29 21:07:44 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:44.614403 | orchestrator | 2025-08-29 21:07:44 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:44.614433 | orchestrator | 2025-08-29 21:07:44 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:44.614446 | orchestrator | 2025-08-29 21:07:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:47.660789 | orchestrator | 2025-08-29 21:07:47 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:47.661559 | orchestrator | 2025-08-29 21:07:47 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:47.663520 | orchestrator | 2025-08-29 21:07:47 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:47.665696 | orchestrator | 2025-08-29 21:07:47 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:47.669797 | orchestrator | 2025-08-29 21:07:47 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:47.669849 | orchestrator | 2025-08-29 21:07:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:50.713329 | orchestrator | 2025-08-29 21:07:50 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:50.716054 | orchestrator | 2025-08-29 21:07:50 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:50.717465 | orchestrator | 2025-08-29 21:07:50 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:50.720616 | orchestrator | 2025-08-29 21:07:50 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state STARTED 2025-08-29 21:07:50.723139 | orchestrator | 2025-08-29 21:07:50 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:50.723310 | orchestrator | 2025-08-29 21:07:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:53.764384 | orchestrator | 2025-08-29 21:07:53 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:53.766072 | orchestrator | 
2025-08-29 21:07:53 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:53.767965 | orchestrator | 2025-08-29 21:07:53 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:53.770380 | orchestrator | 2025-08-29 21:07:53 | INFO  | Task 76014f1e-6e3e-4dc8-868e-09619ba61b76 is in state SUCCESS 2025-08-29 21:07:53.773233 | orchestrator | 2025-08-29 21:07:53 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:53.773287 | orchestrator | 2025-08-29 21:07:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:56.818420 | orchestrator | 2025-08-29 21:07:56 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:07:56.819632 | orchestrator | 2025-08-29 21:07:56 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:56.821037 | orchestrator | 2025-08-29 21:07:56 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:56.822340 | orchestrator | 2025-08-29 21:07:56 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:56.823203 | orchestrator | 2025-08-29 21:07:56 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:56.823235 | orchestrator | 2025-08-29 21:07:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:07:59.863265 | orchestrator | 2025-08-29 21:07:59 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:07:59.864713 | orchestrator | 2025-08-29 21:07:59 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:07:59.866969 | orchestrator | 2025-08-29 21:07:59 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:07:59.868245 | orchestrator | 2025-08-29 21:07:59 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:07:59.869706 | orchestrator | 2025-08-29 21:07:59 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:07:59.869729 | orchestrator | 2025-08-29 21:07:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:02.904768 | orchestrator | 2025-08-29 21:08:02 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:02.906402 | orchestrator | 2025-08-29 21:08:02 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:02.908257 | orchestrator | 2025-08-29 21:08:02 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:02.910178 | orchestrator | 2025-08-29 21:08:02 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:02.911687 | orchestrator | 2025-08-29 21:08:02 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:02.911855 | orchestrator | 2025-08-29 21:08:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:05.952971 | orchestrator | 2025-08-29 21:08:05 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:05.954458 | orchestrator | 2025-08-29 21:08:05 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:05.959210 | orchestrator | 2025-08-29 21:08:05 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:05.959285 | orchestrator | 2025-08-29 21:08:05 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 
21:08:05.960417 | orchestrator | 2025-08-29 21:08:05 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:05.960570 | orchestrator | 2025-08-29 21:08:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:08.995424 | orchestrator | 2025-08-29 21:08:08 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:08.996272 | orchestrator | 2025-08-29 21:08:08 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:08.997063 | orchestrator | 2025-08-29 21:08:08 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:08.997913 | orchestrator | 2025-08-29 21:08:08 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:08.998806 | orchestrator | 2025-08-29 21:08:08 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:08.998852 | orchestrator | 2025-08-29 21:08:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:12.040486 | orchestrator | 2025-08-29 21:08:12 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:12.041500 | orchestrator | 2025-08-29 21:08:12 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:12.043792 | orchestrator | 2025-08-29 21:08:12 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:12.045154 | orchestrator | 2025-08-29 21:08:12 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:12.046333 | orchestrator | 2025-08-29 21:08:12 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:12.046365 | orchestrator | 2025-08-29 21:08:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:15.080797 | orchestrator | 2025-08-29 21:08:15 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:15.081392 | orchestrator | 2025-08-29 21:08:15 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:15.081966 | orchestrator | 2025-08-29 21:08:15 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:15.083120 | orchestrator | 2025-08-29 21:08:15 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:15.084151 | orchestrator | 2025-08-29 21:08:15 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:15.084182 | orchestrator | 2025-08-29 21:08:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:18.120097 | orchestrator | 2025-08-29 21:08:18 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:18.120179 | orchestrator | 2025-08-29 21:08:18 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:18.120188 | orchestrator | 2025-08-29 21:08:18 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:18.120407 | orchestrator | 2025-08-29 21:08:18 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:18.121467 | orchestrator | 2025-08-29 21:08:18 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:18.121504 | orchestrator | 2025-08-29 21:08:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:21.156962 | orchestrator | 2025-08-29 21:08:21 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 
21:08:21.158208 | orchestrator | 2025-08-29 21:08:21 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:21.161105 | orchestrator | 2025-08-29 21:08:21 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:21.162187 | orchestrator | 2025-08-29 21:08:21 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:21.163662 | orchestrator | 2025-08-29 21:08:21 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:21.163685 | orchestrator | 2025-08-29 21:08:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:24.190786 | orchestrator | 2025-08-29 21:08:24 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:24.190895 | orchestrator | 2025-08-29 21:08:24 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:24.192580 | orchestrator | 2025-08-29 21:08:24 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:24.192607 | orchestrator | 2025-08-29 21:08:24 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:24.192619 | orchestrator | 2025-08-29 21:08:24 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:24.192630 | orchestrator | 2025-08-29 21:08:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:27.224499 | orchestrator | 2025-08-29 21:08:27 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:27.224923 | orchestrator | 2025-08-29 21:08:27 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:27.225302 | orchestrator | 2025-08-29 21:08:27 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:27.225858 | orchestrator | 2025-08-29 21:08:27 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:27.226676 | orchestrator | 2025-08-29 21:08:27 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:27.226703 | orchestrator | 2025-08-29 21:08:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:30.260909 | orchestrator | 2025-08-29 21:08:30 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:30.261469 | orchestrator | 2025-08-29 21:08:30 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:30.262903 | orchestrator | 2025-08-29 21:08:30 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:30.263825 | orchestrator | 2025-08-29 21:08:30 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:30.264751 | orchestrator | 2025-08-29 21:08:30 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:30.264775 | orchestrator | 2025-08-29 21:08:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:33.289383 | orchestrator | 2025-08-29 21:08:33 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:33.289876 | orchestrator | 2025-08-29 21:08:33 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:33.290884 | orchestrator | 2025-08-29 21:08:33 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:33.291631 | orchestrator | 2025-08-29 21:08:33 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in 
state STARTED 2025-08-29 21:08:33.292583 | orchestrator | 2025-08-29 21:08:33 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:33.292604 | orchestrator | 2025-08-29 21:08:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:36.323025 | orchestrator | 2025-08-29 21:08:36 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:36.323114 | orchestrator | 2025-08-29 21:08:36 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:36.323417 | orchestrator | 2025-08-29 21:08:36 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:36.324030 | orchestrator | 2025-08-29 21:08:36 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:36.324769 | orchestrator | 2025-08-29 21:08:36 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:36.324815 | orchestrator | 2025-08-29 21:08:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:39.359422 | orchestrator | 2025-08-29 21:08:39 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:39.359856 | orchestrator | 2025-08-29 21:08:39 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:39.360224 | orchestrator | 2025-08-29 21:08:39 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:39.361787 | orchestrator | 2025-08-29 21:08:39 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:39.362146 | orchestrator | 2025-08-29 21:08:39 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:39.362172 | orchestrator | 2025-08-29 21:08:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:42.384074 | orchestrator | 2025-08-29 21:08:42 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:42.384405 | orchestrator | 2025-08-29 21:08:42 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:42.384983 | orchestrator | 2025-08-29 21:08:42 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:42.385679 | orchestrator | 2025-08-29 21:08:42 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:42.386362 | orchestrator | 2025-08-29 21:08:42 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:42.386387 | orchestrator | 2025-08-29 21:08:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:45.406919 | orchestrator | 2025-08-29 21:08:45 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:45.407136 | orchestrator | 2025-08-29 21:08:45 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:45.408054 | orchestrator | 2025-08-29 21:08:45 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:45.409039 | orchestrator | 2025-08-29 21:08:45 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:45.409789 | orchestrator | 2025-08-29 21:08:45 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:45.409809 | orchestrator | 2025-08-29 21:08:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:48.438944 | orchestrator | 2025-08-29 21:08:48 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in 
state STARTED 2025-08-29 21:08:48.440052 | orchestrator | 2025-08-29 21:08:48 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:48.440597 | orchestrator | 2025-08-29 21:08:48 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:48.441398 | orchestrator | 2025-08-29 21:08:48 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:48.441847 | orchestrator | 2025-08-29 21:08:48 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:48.441873 | orchestrator | 2025-08-29 21:08:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:51.480476 | orchestrator | 2025-08-29 21:08:51 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:51.483372 | orchestrator | 2025-08-29 21:08:51 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:51.485852 | orchestrator | 2025-08-29 21:08:51 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:51.486389 | orchestrator | 2025-08-29 21:08:51 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:51.486967 | orchestrator | 2025-08-29 21:08:51 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:51.486996 | orchestrator | 2025-08-29 21:08:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:54.515927 | orchestrator | 2025-08-29 21:08:54 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:54.516057 | orchestrator | 2025-08-29 21:08:54 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:54.516187 | orchestrator | 2025-08-29 21:08:54 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:54.517012 | orchestrator | 2025-08-29 21:08:54 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:54.517671 | orchestrator | 2025-08-29 21:08:54 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:54.517698 | orchestrator | 2025-08-29 21:08:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:08:57.552641 | orchestrator | 2025-08-29 21:08:57 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:08:57.554133 | orchestrator | 2025-08-29 21:08:57 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state STARTED 2025-08-29 21:08:57.556127 | orchestrator | 2025-08-29 21:08:57 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:08:57.557005 | orchestrator | 2025-08-29 21:08:57 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:08:57.559146 | orchestrator | 2025-08-29 21:08:57 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:08:57.559177 | orchestrator | 2025-08-29 21:08:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:00.584286 | orchestrator | 2025-08-29 21:09:00 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:00.584554 | orchestrator | 2025-08-29 21:09:00 | INFO  | Task c783ac4e-0f7b-4417-aaf7-81c85b143f3b is in state SUCCESS 2025-08-29 21:09:00.584933 | orchestrator | 2025-08-29 21:09:00.584999 | orchestrator | 2025-08-29 21:09:00.585012 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 
21:09:00.585024 | orchestrator | 2025-08-29 21:09:00.585036 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 21:09:00.585047 | orchestrator | Friday 29 August 2025 21:06:58 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-08-29 21:09:00.585058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 21:09:00.585071 | orchestrator | 2025-08-29 21:09:00.585083 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 21:09:00.585094 | orchestrator | Friday 29 August 2025 21:06:58 +0000 (0:00:00.216) 0:00:00.451 ********* 2025-08-29 21:09:00.585105 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 21:09:00.585116 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-08-29 21:09:00.585128 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-08-29 21:09:00.585139 | orchestrator | 2025-08-29 21:09:00.585163 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-08-29 21:09:00.585175 | orchestrator | Friday 29 August 2025 21:06:59 +0000 (0:00:01.201) 0:00:01.653 ********* 2025-08-29 21:09:00.585186 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 21:09:00.585197 | orchestrator | 2025-08-29 21:09:00.585209 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 21:09:00.585240 | orchestrator | Friday 29 August 2025 21:07:00 +0000 (0:00:01.107) 0:00:02.761 ********* 2025-08-29 21:09:00.585252 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:00.585263 | orchestrator | 2025-08-29 21:09:00.585275 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 21:09:00.585286 | orchestrator | Friday 29 August 2025 21:07:01 +0000 (0:00:00.924) 0:00:03.686 ********* 2025-08-29 21:09:00.585296 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:00.585307 | orchestrator | 2025-08-29 21:09:00.585318 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 21:09:00.585329 | orchestrator | Friday 29 August 2025 21:07:02 +0000 (0:00:00.880) 0:00:04.566 ********* 2025-08-29 21:09:00.585340 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
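The osism.services.cephclient role above creates /opt/cephclient/configuration and /opt/cephclient/data, copies ceph.conf, a keyring and a docker-compose.yml, and then retries 'Manage cephclient service' until the container comes up. A hypothetical sketch of what such a compose file could look like; only the host paths come from the log, the image reference and container mounts are placeholders:

# docker-compose.yml sketch for a containerized ceph client (placeholders, not the generated file)
services:
  cephclient:
    image: registry.example.org/cephclient:latest   # placeholder image reference
    volumes:
      - /opt/cephclient/configuration:/etc/ceph:ro   # host paths as created by the role above
      - /opt/cephclient/data:/data
    restart: unless-stopped
    command: sleep infinity   # keep the container running so wrapper scripts can exec into it

The wrapper scripts installed afterwards (ceph, ceph-authtool, rados, radosgw-admin, rbd) presumably just exec the matching command inside this container.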
2025-08-29 21:09:00.585351 | orchestrator | ok: [testbed-manager] 2025-08-29 21:09:00.585362 | orchestrator | 2025-08-29 21:09:00.585372 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 21:09:00.585383 | orchestrator | Friday 29 August 2025 21:07:44 +0000 (0:00:41.692) 0:00:46.259 ********* 2025-08-29 21:09:00.585394 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 21:09:00.585405 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 21:09:00.585416 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 21:09:00.585427 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 21:09:00.585438 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 21:09:00.585449 | orchestrator | 2025-08-29 21:09:00.585460 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 21:09:00.585566 | orchestrator | Friday 29 August 2025 21:07:47 +0000 (0:00:03.406) 0:00:49.665 ********* 2025-08-29 21:09:00.585579 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 21:09:00.585590 | orchestrator | 2025-08-29 21:09:00.585601 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 21:09:00.585612 | orchestrator | Friday 29 August 2025 21:07:47 +0000 (0:00:00.430) 0:00:50.095 ********* 2025-08-29 21:09:00.585622 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:09:00.585633 | orchestrator | 2025-08-29 21:09:00.585644 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 21:09:00.585655 | orchestrator | Friday 29 August 2025 21:07:48 +0000 (0:00:00.144) 0:00:50.240 ********* 2025-08-29 21:09:00.585666 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:09:00.585677 | orchestrator | 2025-08-29 21:09:00.585688 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 21:09:00.585699 | orchestrator | Friday 29 August 2025 21:07:48 +0000 (0:00:00.296) 0:00:50.537 ********* 2025-08-29 21:09:00.585710 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:00.585721 | orchestrator | 2025-08-29 21:09:00.585731 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 21:09:00.585742 | orchestrator | Friday 29 August 2025 21:07:50 +0000 (0:00:01.690) 0:00:52.227 ********* 2025-08-29 21:09:00.585753 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:00.585764 | orchestrator | 2025-08-29 21:09:00.585775 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 21:09:00.585785 | orchestrator | Friday 29 August 2025 21:07:50 +0000 (0:00:00.719) 0:00:52.947 ********* 2025-08-29 21:09:00.585796 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:00.585807 | orchestrator | 2025-08-29 21:09:00.585818 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 21:09:00.585829 | orchestrator | Friday 29 August 2025 21:07:51 +0000 (0:00:00.634) 0:00:53.581 ********* 2025-08-29 21:09:00.585840 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 21:09:00.585850 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 21:09:00.585861 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 21:09:00.585872 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-08-29 21:09:00.585893 | orchestrator | 2025-08-29 21:09:00.585904 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:09:00.585916 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:09:00.585928 | orchestrator | 2025-08-29 21:09:00.585939 | orchestrator | 2025-08-29 21:09:00.585981 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:09:00.585993 | orchestrator | Friday 29 August 2025 21:07:52 +0000 (0:00:01.397) 0:00:54.978 ********* 2025-08-29 21:09:00.586004 | orchestrator | =============================================================================== 2025-08-29 21:09:00.586015 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.69s 2025-08-29 21:09:00.586069 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.41s 2025-08-29 21:09:00.586107 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.69s 2025-08-29 21:09:00.586119 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.40s 2025-08-29 21:09:00.586129 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s 2025-08-29 21:09:00.586140 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2025-08-29 21:09:00.586151 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2025-08-29 21:09:00.586168 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2025-08-29 21:09:00.586180 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.72s 2025-08-29 21:09:00.586193 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.63s 2025-08-29 21:09:00.586292 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2025-08-29 21:09:00.586310 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s 2025-08-29 21:09:00.586324 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-08-29 21:09:00.586337 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-08-29 21:09:00.586350 | orchestrator | 2025-08-29 21:09:00.586363 | orchestrator | 2025-08-29 21:09:00.586376 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-08-29 21:09:00.586388 | orchestrator | 2025-08-29 21:09:00.586401 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-08-29 21:09:00.586414 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.076) 0:00:00.076 ********* 2025-08-29 21:09:00.586427 | orchestrator | changed: [localhost] 2025-08-29 21:09:00.586440 | orchestrator | 2025-08-29 21:09:00.586453 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-08-29 21:09:00.586465 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.787) 0:00:00.863 ********* 2025-08-29 21:09:00.586478 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
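The retries above point to transient failures while fetching the ironic-python-agent (IPA) artifacts. A minimal sketch of a download task with the same retry behaviour, assuming ansible.builtin.get_url against a published IPA build (URL, destination path, and retry values are illustrative, not the testbed's actual configuration):

    - name: Download ironic-agent initramfs
      ansible.builtin.get_url:
        url: "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-master.initramfs"
        dest: /opt/ironic/ironic-agent.initramfs
        mode: "0644"
      register: ipa_download
      retries: 3                        # "3 retries left" in the log suggests a small retry budget
      delay: 10
      until: ipa_download is succeeded

The 72.91 s recorded for this task in the recap below covers the failed attempts, the delays between them, and the final successful transfer.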
2025-08-29 21:09:00.586491 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2025-08-29 21:09:00.586505 | orchestrator | changed: [localhost] 2025-08-29 21:09:00.586518 | orchestrator | 2025-08-29 21:09:00.586531 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-08-29 21:09:00.586544 | orchestrator | Friday 29 August 2025 21:08:53 +0000 (0:01:12.911) 0:01:13.775 ********* 2025-08-29 21:09:00.586555 | orchestrator | changed: [localhost] 2025-08-29 21:09:00.586566 | orchestrator | 2025-08-29 21:09:00.586577 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:09:00.586588 | orchestrator | 2025-08-29 21:09:00.586599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:09:00.586610 | orchestrator | Friday 29 August 2025 21:08:58 +0000 (0:00:04.802) 0:01:18.578 ********* 2025-08-29 21:09:00.586621 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:09:00.586632 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:09:00.586652 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:09:00.586663 | orchestrator | 2025-08-29 21:09:00.586674 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:09:00.586685 | orchestrator | Friday 29 August 2025 21:08:59 +0000 (0:00:00.511) 0:01:19.090 ********* 2025-08-29 21:09:00.586696 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 21:09:00.586707 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 21:09:00.586718 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-08-29 21:09:00.586729 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-08-29 21:09:00.586740 | orchestrator | 2025-08-29 21:09:00.586751 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 21:09:00.586762 | orchestrator | skipping: no hosts matched 2025-08-29 21:09:00.586773 | orchestrator | 2025-08-29 21:09:00.586784 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:09:00.586795 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:00.586807 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:00.586819 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:00.586830 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:00.586841 | orchestrator | 2025-08-29 21:09:00.586852 | orchestrator | 2025-08-29 21:09:00.586863 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:09:00.586874 | orchestrator | Friday 29 August 2025 21:09:00 +0000 (0:00:00.867) 0:01:19.957 ********* 2025-08-29 21:09:00.586885 | orchestrator | =============================================================================== 2025-08-29 21:09:00.586896 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 72.91s 2025-08-29 21:09:00.586915 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.80s 2025-08-29 21:09:00.586926 | orchestrator 
| Group hosts based on enabled services ----------------------------------- 0.87s 2025-08-29 21:09:00.586937 | orchestrator | Ensure the destination directory exists --------------------------------- 0.79s 2025-08-29 21:09:00.586968 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2025-08-29 21:09:00.586979 | orchestrator | 2025-08-29 21:09:00 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:00.586991 | orchestrator | 2025-08-29 21:09:00 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:00.587220 | orchestrator | 2025-08-29 21:09:00 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:00.587240 | orchestrator | 2025-08-29 21:09:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:03.613348 | orchestrator | 2025-08-29 21:09:03 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:03.613675 | orchestrator | 2025-08-29 21:09:03 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:03.614269 | orchestrator | 2025-08-29 21:09:03 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:03.614893 | orchestrator | 2025-08-29 21:09:03 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:03.615644 | orchestrator | 2025-08-29 21:09:03 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:03.615669 | orchestrator | 2025-08-29 21:09:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:06.636299 | orchestrator | 2025-08-29 21:09:06 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:06.636376 | orchestrator | 2025-08-29 21:09:06 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:06.637095 | orchestrator | 2025-08-29 21:09:06 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:06.637790 | orchestrator | 2025-08-29 21:09:06 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:06.638481 | orchestrator | 2025-08-29 21:09:06 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:06.638504 | orchestrator | 2025-08-29 21:09:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:09.685603 | orchestrator | 2025-08-29 21:09:09 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:09.685821 | orchestrator | 2025-08-29 21:09:09 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:09.686463 | orchestrator | 2025-08-29 21:09:09 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:09.687082 | orchestrator | 2025-08-29 21:09:09 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:09.687825 | orchestrator | 2025-08-29 21:09:09 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:09.687847 | orchestrator | 2025-08-29 21:09:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:12.711511 | orchestrator | 2025-08-29 21:09:12 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:12.711741 | orchestrator | 2025-08-29 21:09:12 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:12.712424 | orchestrator | 2025-08-29 
21:09:12 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:12.713050 | orchestrator | 2025-08-29 21:09:12 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:12.713717 | orchestrator | 2025-08-29 21:09:12 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:12.713741 | orchestrator | 2025-08-29 21:09:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:15.751062 | orchestrator | 2025-08-29 21:09:15 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:15.751331 | orchestrator | 2025-08-29 21:09:15 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:15.751867 | orchestrator | 2025-08-29 21:09:15 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:15.752450 | orchestrator | 2025-08-29 21:09:15 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:15.753063 | orchestrator | 2025-08-29 21:09:15 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:15.753086 | orchestrator | 2025-08-29 21:09:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:18.784057 | orchestrator | 2025-08-29 21:09:18 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:18.785889 | orchestrator | 2025-08-29 21:09:18 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:18.788023 | orchestrator | 2025-08-29 21:09:18 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:18.789517 | orchestrator | 2025-08-29 21:09:18 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:18.791126 | orchestrator | 2025-08-29 21:09:18 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:18.791467 | orchestrator | 2025-08-29 21:09:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:21.814984 | orchestrator | 2025-08-29 21:09:21 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state STARTED 2025-08-29 21:09:21.816718 | orchestrator | 2025-08-29 21:09:21 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:21.817397 | orchestrator | 2025-08-29 21:09:21 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:21.817951 | orchestrator | 2025-08-29 21:09:21 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:21.818532 | orchestrator | 2025-08-29 21:09:21 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:21.818634 | orchestrator | 2025-08-29 21:09:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:24.855022 | orchestrator | 2025-08-29 21:09:24 | INFO  | Task e2f51a1b-c480-4a6b-a0b3-9445b39506b4 is in state SUCCESS 2025-08-29 21:09:24.857108 | orchestrator | 2025-08-29 21:09:24 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:24.859469 | orchestrator | 2025-08-29 21:09:24 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:24.861680 | orchestrator | 2025-08-29 21:09:24 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:24.863804 | orchestrator | 2025-08-29 21:09:24 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:24.864270 | 
orchestrator | 2025-08-29 21:09:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:27.891778 | orchestrator | 2025-08-29 21:09:27 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:27.892850 | orchestrator | 2025-08-29 21:09:27 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:27.894466 | orchestrator | 2025-08-29 21:09:27 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:27.895622 | orchestrator | 2025-08-29 21:09:27 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:27.896058 | orchestrator | 2025-08-29 21:09:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:30.929278 | orchestrator | 2025-08-29 21:09:30 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:30.929683 | orchestrator | 2025-08-29 21:09:30 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:30.931367 | orchestrator | 2025-08-29 21:09:30 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:30.931787 | orchestrator | 2025-08-29 21:09:30 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:30.931799 | orchestrator | 2025-08-29 21:09:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:33.957228 | orchestrator | 2025-08-29 21:09:33 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:33.957813 | orchestrator | 2025-08-29 21:09:33 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:33.958272 | orchestrator | 2025-08-29 21:09:33 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:33.958757 | orchestrator | 2025-08-29 21:09:33 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:33.958776 | orchestrator | 2025-08-29 21:09:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:36.991316 | orchestrator | 2025-08-29 21:09:36 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:36.991485 | orchestrator | 2025-08-29 21:09:36 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:36.992064 | orchestrator | 2025-08-29 21:09:36 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state STARTED 2025-08-29 21:09:36.992718 | orchestrator | 2025-08-29 21:09:36 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:36.992740 | orchestrator | 2025-08-29 21:09:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:40.013381 | orchestrator | 2025-08-29 21:09:40 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:40.014527 | orchestrator | 2025-08-29 21:09:40 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:40.016307 | orchestrator | 2025-08-29 21:09:40 | INFO  | Task 2274767a-6804-465d-b07b-23ff33788dd1 is in state SUCCESS 2025-08-29 21:09:40.019395 | orchestrator | 2025-08-29 21:09:40.019443 | orchestrator | 2025-08-29 21:09:40.019612 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-08-29 21:09:40.019631 | orchestrator | 2025-08-29 21:09:40.019642 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-08-29 21:09:40.019654 | orchestrator | Friday 29 
August 2025 21:07:57 +0000 (0:00:00.248) 0:00:00.248 ********* 2025-08-29 21:09:40.019665 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019677 | orchestrator | 2025-08-29 21:09:40.019689 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 21:09:40.019700 | orchestrator | Friday 29 August 2025 21:07:59 +0000 (0:00:02.134) 0:00:02.383 ********* 2025-08-29 21:09:40.019711 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019722 | orchestrator | 2025-08-29 21:09:40.019747 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 21:09:40.019759 | orchestrator | Friday 29 August 2025 21:08:00 +0000 (0:00:00.970) 0:00:03.354 ********* 2025-08-29 21:09:40.019770 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019781 | orchestrator | 2025-08-29 21:09:40.019792 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 21:09:40.019803 | orchestrator | Friday 29 August 2025 21:08:01 +0000 (0:00:00.927) 0:00:04.281 ********* 2025-08-29 21:09:40.019814 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019825 | orchestrator | 2025-08-29 21:09:40.019836 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-08-29 21:09:40.019847 | orchestrator | Friday 29 August 2025 21:08:02 +0000 (0:00:01.038) 0:00:05.319 ********* 2025-08-29 21:09:40.019858 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019870 | orchestrator | 2025-08-29 21:09:40.019881 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 21:09:40.019892 | orchestrator | Friday 29 August 2025 21:08:03 +0000 (0:00:00.936) 0:00:06.256 ********* 2025-08-29 21:09:40.019903 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019913 | orchestrator | 2025-08-29 21:09:40.019943 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 21:09:40.019955 | orchestrator | Friday 29 August 2025 21:08:04 +0000 (0:00:00.924) 0:00:07.180 ********* 2025-08-29 21:09:40.019966 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.019979 | orchestrator | 2025-08-29 21:09:40.019992 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 21:09:40.020005 | orchestrator | Friday 29 August 2025 21:08:05 +0000 (0:00:01.137) 0:00:08.318 ********* 2025-08-29 21:09:40.020017 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.020030 | orchestrator | 2025-08-29 21:09:40.020044 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 21:09:40.020079 | orchestrator | Friday 29 August 2025 21:08:06 +0000 (0:00:01.067) 0:00:09.385 ********* 2025-08-29 21:09:40.020093 | orchestrator | changed: [testbed-manager] 2025-08-29 21:09:40.020106 | orchestrator | 2025-08-29 21:09:40.020119 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 21:09:40.020133 | orchestrator | Friday 29 August 2025 21:08:58 +0000 (0:00:52.704) 0:01:02.089 ********* 2025-08-29 21:09:40.020145 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:09:40.020158 | orchestrator | 2025-08-29 21:09:40.020171 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 21:09:40.020184 | 
orchestrator | 2025-08-29 21:09:40.020197 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 21:09:40.020210 | orchestrator | Friday 29 August 2025 21:08:59 +0000 (0:00:00.134) 0:01:02.224 ********* 2025-08-29 21:09:40.020222 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.020235 | orchestrator | 2025-08-29 21:09:40.020248 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 21:09:40.020261 | orchestrator | 2025-08-29 21:09:40.020274 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 21:09:40.020287 | orchestrator | Friday 29 August 2025 21:09:10 +0000 (0:00:11.502) 0:01:13.727 ********* 2025-08-29 21:09:40.020300 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:09:40.020314 | orchestrator | 2025-08-29 21:09:40.020327 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 21:09:40.020339 | orchestrator | 2025-08-29 21:09:40.020350 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 21:09:40.020361 | orchestrator | Friday 29 August 2025 21:09:11 +0000 (0:00:01.333) 0:01:15.060 ********* 2025-08-29 21:09:40.020372 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:09:40.020383 | orchestrator | 2025-08-29 21:09:40.020394 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:09:40.020406 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 21:09:40.020418 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:40.020429 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:40.020440 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:09:40.020451 | orchestrator | 2025-08-29 21:09:40.020462 | orchestrator | 2025-08-29 21:09:40.020473 | orchestrator | 2025-08-29 21:09:40.020484 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:09:40.020496 | orchestrator | Friday 29 August 2025 21:09:23 +0000 (0:00:11.337) 0:01:26.397 ********* 2025-08-29 21:09:40.020506 | orchestrator | =============================================================================== 2025-08-29 21:09:40.020517 | orchestrator | Create admin user ------------------------------------------------------ 52.70s 2025-08-29 21:09:40.020528 | orchestrator | Restart ceph manager service ------------------------------------------- 24.17s 2025-08-29 21:09:40.020558 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.13s 2025-08-29 21:09:40.020571 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s 2025-08-29 21:09:40.020582 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.07s 2025-08-29 21:09:40.020593 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s 2025-08-29 21:09:40.020604 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.97s 2025-08-29 21:09:40.020615 | orchestrator | Set mgr/dashboard/standby_behaviour to error 
---------------------------- 0.94s 2025-08-29 21:09:40.020626 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.93s 2025-08-29 21:09:40.020644 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.92s 2025-08-29 21:09:40.020655 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-08-29 21:09:40.020666 | orchestrator | 2025-08-29 21:09:40.020677 | orchestrator | 2025-08-29 21:09:40.020688 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:09:40.020699 | orchestrator | 2025-08-29 21:09:40.020710 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:09:40.020721 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.200) 0:00:00.200 ********* 2025-08-29 21:09:40.020732 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:09:40.020743 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:09:40.020754 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:09:40.020765 | orchestrator | 2025-08-29 21:09:40.020776 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:09:40.020787 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.291) 0:00:00.492 ********* 2025-08-29 21:09:40.020798 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-08-29 21:09:40.020809 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-08-29 21:09:40.020820 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-08-29 21:09:40.020831 | orchestrator | 2025-08-29 21:09:40.020842 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-08-29 21:09:40.020853 | orchestrator | 2025-08-29 21:09:40.020864 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 21:09:40.020874 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.421) 0:00:00.913 ********* 2025-08-29 21:09:40.020885 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:09:40.020897 | orchestrator | 2025-08-29 21:09:40.020908 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-08-29 21:09:40.020932 | orchestrator | Friday 29 August 2025 21:07:41 +0000 (0:00:00.538) 0:00:01.452 ********* 2025-08-29 21:09:40.020944 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-08-29 21:09:40.020955 | orchestrator | 2025-08-29 21:09:40.020966 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-08-29 21:09:40.020977 | orchestrator | Friday 29 August 2025 21:07:45 +0000 (0:00:03.797) 0:00:05.249 ********* 2025-08-29 21:09:40.020988 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-08-29 21:09:40.021000 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-08-29 21:09:40.021011 | orchestrator | 2025-08-29 21:09:40.021022 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-08-29 21:09:40.021033 | orchestrator | Friday 29 August 2025 21:07:52 +0000 (0:00:06.901) 0:00:12.151 ********* 2025-08-29 21:09:40.021044 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:09:40.021054 | orchestrator | 2025-08-29 21:09:40.021065 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-08-29 21:09:40.021076 | orchestrator | Friday 29 August 2025 21:07:55 +0000 (0:00:03.098) 0:00:15.249 ********* 2025-08-29 21:09:40.021087 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:09:40.021098 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-08-29 21:09:40.021109 | orchestrator | 2025-08-29 21:09:40.021120 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-08-29 21:09:40.021131 | orchestrator | Friday 29 August 2025 21:07:58 +0000 (0:00:03.752) 0:00:19.002 ********* 2025-08-29 21:09:40.021142 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:09:40.021153 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 21:09:40.021164 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 21:09:40.021182 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 21:09:40.021193 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 21:09:40.021204 | orchestrator | 2025-08-29 21:09:40.021215 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 21:09:40.021226 | orchestrator | Friday 29 August 2025 21:08:14 +0000 (0:00:15.867) 0:00:34.870 ********* 2025-08-29 21:09:40.021237 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 21:09:40.021247 | orchestrator | 2025-08-29 21:09:40.021258 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 21:09:40.021269 | orchestrator | Friday 29 August 2025 21:08:19 +0000 (0:00:04.318) 0:00:39.188 ********* 2025-08-29 21:09:40.021296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021393 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021429 | orchestrator | 2025-08-29 21:09:40.021440 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 21:09:40.021451 | orchestrator | Friday 29 August 2025 21:08:21 +0000 (0:00:02.380) 0:00:41.569 ********* 2025-08-29 21:09:40.021463 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-08-29 21:09:40.021474 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-08-29 21:09:40.021485 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-08-29 21:09:40.021506 | orchestrator | 2025-08-29 21:09:40.021517 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-08-29 21:09:40.021528 | orchestrator | Friday 29 August 2025 21:08:23 +0000 (0:00:01.579) 0:00:43.148 ********* 2025-08-29 21:09:40.021539 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.021550 | orchestrator | 2025-08-29 21:09:40.021561 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-08-29 21:09:40.021572 | orchestrator | Friday 29 August 2025 21:08:23 +0000 (0:00:00.109) 0:00:43.258 ********* 2025-08-29 21:09:40.021584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.021595 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.021607 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.021617 | orchestrator | 2025-08-29 21:09:40.021628 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 21:09:40.021640 | orchestrator | Friday 29 August 2025 21:08:23 +0000 (0:00:00.371) 0:00:43.630 ********* 2025-08-29 
21:09:40.021651 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:09:40.021662 | orchestrator | 2025-08-29 21:09:40.021673 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-08-29 21:09:40.021684 | orchestrator | Friday 29 August 2025 21:08:23 +0000 (0:00:00.464) 0:00:44.094 ********* 2025-08-29 21:09:40.021707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.021752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.021838 | orchestrator | 2025-08-29 21:09:40.021849 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 21:09:40.021861 | orchestrator | Friday 29 August 2025 21:08:27 +0000 (0:00:03.410) 0:00:47.505 ********* 2025-08-29 21:09:40.021872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.021884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.021907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.021946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.021959 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.021972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.021996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022008 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.022065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.022081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022106 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022118 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.022130 | orchestrator | 2025-08-29 21:09:40.022141 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 21:09:40.022152 | orchestrator | Friday 29 August 2025 21:08:28 +0000 (0:00:01.439) 0:00:48.944 ********* 2025-08-29 21:09:40.022164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.022184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022207 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.022219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.022556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022671 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.022687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.022699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.022722 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.022734 | orchestrator | 2025-08-29 21:09:40.022746 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 21:09:40.022757 | orchestrator | Friday 29 August 2025 21:08:29 +0000 (0:00:00.921) 0:00:49.866 ********* 2025-08-29 21:09:40.022789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.022803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.022822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.022834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022901 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.022953 | orchestrator | 2025-08-29 21:09:40.022965 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 21:09:40.022978 | orchestrator | Friday 29 August 2025 21:08:33 +0000 (0:00:03.580) 0:00:53.447 ********* 2025-08-29 21:09:40.022989 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.023000 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:09:40.023011 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:09:40.023022 | orchestrator | 2025-08-29 21:09:40.023033 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 21:09:40.023044 | orchestrator | Friday 29 August 2025 21:08:36 +0000 (0:00:02.739) 0:00:56.187 ********* 2025-08-29 21:09:40.023055 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:09:40.023066 | orchestrator | 2025-08-29 21:09:40.023079 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 21:09:40.023091 | orchestrator | Friday 29 August 2025 21:08:37 +0000 (0:00:01.467) 0:00:57.654 ********* 2025-08-29 21:09:40.023104 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.023117 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.023129 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.023142 | orchestrator | 2025-08-29 21:09:40.023154 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 21:09:40.023167 | orchestrator | Friday 29 August 2025 21:08:38 +0000 (0:00:00.869) 0:00:58.524 ********* 2025-08-29 21:09:40.023180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023330 | orchestrator | 2025-08-29 21:09:40.023341 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 21:09:40.023352 | orchestrator | Friday 29 August 2025 21:08:47 +0000 (0:00:08.726) 0:01:07.250 ********* 2025-08-29 21:09:40.023363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.023375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023404 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:09:40.023426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.023439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023450 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 21:09:40.023473 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.023485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:09:40.023524 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.023535 | orchestrator | 2025-08-29 21:09:40.023546 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 21:09:40.023557 | orchestrator | Friday 29 August 2025 21:08:48 +0000 (0:00:01.169) 0:01:08.420 ********* 2025-08-29 21:09:40.023569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 21:09:40.023638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:09:40.023696 | orchestrator | 2025-08-29 21:09:40.023708 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 21:09:40.023765 | orchestrator | Friday 29 August 2025 21:08:52 +0000 (0:00:03.861) 0:01:12.281 ********* 2025-08-29 21:09:40.023777 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
21:09:40.023788 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:09:40.023799 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:09:40.023810 | orchestrator | 2025-08-29 21:09:40.023821 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 21:09:40.023832 | orchestrator | Friday 29 August 2025 21:08:52 +0000 (0:00:00.457) 0:01:12.739 ********* 2025-08-29 21:09:40.023843 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.023853 | orchestrator | 2025-08-29 21:09:40.023864 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 21:09:40.023875 | orchestrator | Friday 29 August 2025 21:08:54 +0000 (0:00:02.031) 0:01:14.770 ********* 2025-08-29 21:09:40.023886 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.023896 | orchestrator | 2025-08-29 21:09:40.023907 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 21:09:40.023918 | orchestrator | Friday 29 August 2025 21:08:56 +0000 (0:00:02.217) 0:01:16.988 ********* 2025-08-29 21:09:40.023944 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.023955 | orchestrator | 2025-08-29 21:09:40.023965 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 21:09:40.023976 | orchestrator | Friday 29 August 2025 21:09:07 +0000 (0:00:11.057) 0:01:28.046 ********* 2025-08-29 21:09:40.023987 | orchestrator | 2025-08-29 21:09:40.023998 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 21:09:40.024009 | orchestrator | Friday 29 August 2025 21:09:08 +0000 (0:00:00.123) 0:01:28.169 ********* 2025-08-29 21:09:40.024020 | orchestrator | 2025-08-29 21:09:40.024042 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 21:09:40.024054 | orchestrator | Friday 29 August 2025 21:09:08 +0000 (0:00:00.122) 0:01:28.291 ********* 2025-08-29 21:09:40.024065 | orchestrator | 2025-08-29 21:09:40.024076 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 21:09:40.024086 | orchestrator | Friday 29 August 2025 21:09:08 +0000 (0:00:00.125) 0:01:28.417 ********* 2025-08-29 21:09:40.024097 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.024108 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:09:40.024119 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:09:40.024130 | orchestrator | 2025-08-29 21:09:40.024141 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 21:09:40.024152 | orchestrator | Friday 29 August 2025 21:09:20 +0000 (0:00:12.098) 0:01:40.515 ********* 2025-08-29 21:09:40.024163 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.024174 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:09:40.024185 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:09:40.024195 | orchestrator | 2025-08-29 21:09:40.024206 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 21:09:40.024217 | orchestrator | Friday 29 August 2025 21:09:30 +0000 (0:00:10.566) 0:01:51.082 ********* 2025-08-29 21:09:40.024227 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:09:40.024238 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:09:40.024249 | orchestrator | changed: [testbed-node-2] 
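The barbican containers restarted above are only reported healthy once the healthchecks from the service definitions pass (healthcheck_curl against the API bind address for barbican-api, healthcheck_port against 5672 for the worker and keystone-listener). A minimal external probe in the same spirit is sketched below; it assumes healthcheck_curl is roughly a plain HTTP GET and healthcheck_port roughly a TCP reachability check (the real kolla wrappers may differ), the address 192.168.16.10 and ports 9311/5672 are taken from the task output above, and the helper names are illustrative, not part of kolla:

# Illustrative only: approximates what the logged healthchecks verify for one
# controller, assuming healthcheck_curl is essentially an HTTP GET and
# healthcheck_port a TCP reachability check (the real kolla wrappers may differ).
import socket
import urllib.error
import urllib.request

BARBICAN_API = "http://192.168.16.10:9311"    # bind address from the barbican-api healthcheck above
AMQP_HOST, AMQP_PORT = "192.168.16.10", 5672  # port from the worker/keystone-listener checks (host assumed)


def api_answers(url: str, timeout: float = 5.0) -> bool:
    """True if barbican-api responds to HTTP at all (any status below 500)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:  # a 3xx/4xx still means the API answered
        return exc.code < 500
    except OSError:
        return False


def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("barbican-api answers:", api_answers(BARBICAN_API))
    print("AMQP port reachable:", port_reachable(AMQP_HOST, AMQP_PORT))

The same pattern applies to the placement-api healthcheck on port 8780 that appears later in this log.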
2025-08-29 21:09:40.024260 | orchestrator |
2025-08-29 21:09:40.024271 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 21:09:40.024282 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 21:09:40.024293 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 21:09:40.024304 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 21:09:40.024315 | orchestrator |
2025-08-29 21:09:40.024334 | orchestrator |
2025-08-29 21:09:40.024345 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 21:09:40.024356 | orchestrator | Friday 29 August 2025 21:09:39 +0000 (0:00:08.042) 0:01:59.125 *********
2025-08-29 21:09:40.024366 | orchestrator | ===============================================================================
2025-08-29 21:09:40.024377 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.87s
2025-08-29 21:09:40.024388 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.10s
2025-08-29 21:09:40.024399 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.06s
2025-08-29 21:09:40.024410 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.57s
2025-08-29 21:09:40.024421 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.73s
2025-08-29 21:09:40.024431 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.04s
2025-08-29 21:09:40.024442 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.90s
2025-08-29 21:09:40.024453 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.32s
2025-08-29 21:09:40.024464 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.86s
2025-08-29 21:09:40.024474 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.80s
2025-08-29 21:09:40.024485 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.75s
2025-08-29 21:09:40.024496 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.58s
2025-08-29 21:09:40.024507 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.42s
2025-08-29 21:09:40.024517 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.10s
2025-08-29 21:09:40.024528 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.74s
2025-08-29 21:09:40.024539 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.38s
2025-08-29 21:09:40.024549 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.22s
2025-08-29 21:09:40.024560 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.03s
2025-08-29 21:09:40.024571 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.58s
2025-08-29 21:09:40.024582 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.47s
2025-08-29 21:09:40.024593 | orchestrator |
2025-08-29 21:09:40 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:40.024604 | orchestrator | 2025-08-29 21:09:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:43.043283 | orchestrator | 2025-08-29 21:09:43 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:43.043477 | orchestrator | 2025-08-29 21:09:43 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:43.044743 | orchestrator | 2025-08-29 21:09:43 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:43.045820 | orchestrator | 2025-08-29 21:09:43 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:43.045863 | orchestrator | 2025-08-29 21:09:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:46.066814 | orchestrator | 2025-08-29 21:09:46 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:46.066903 | orchestrator | 2025-08-29 21:09:46 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:46.067360 | orchestrator | 2025-08-29 21:09:46 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:46.067986 | orchestrator | 2025-08-29 21:09:46 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:46.068031 | orchestrator | 2025-08-29 21:09:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:49.090475 | orchestrator | 2025-08-29 21:09:49 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:49.090583 | orchestrator | 2025-08-29 21:09:49 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:49.091249 | orchestrator | 2025-08-29 21:09:49 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:49.091772 | orchestrator | 2025-08-29 21:09:49 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:49.091806 | orchestrator | 2025-08-29 21:09:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:52.113171 | orchestrator | 2025-08-29 21:09:52 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:52.113500 | orchestrator | 2025-08-29 21:09:52 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:52.114615 | orchestrator | 2025-08-29 21:09:52 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:52.115166 | orchestrator | 2025-08-29 21:09:52 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:52.115264 | orchestrator | 2025-08-29 21:09:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:55.146330 | orchestrator | 2025-08-29 21:09:55 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:55.148228 | orchestrator | 2025-08-29 21:09:55 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:55.151099 | orchestrator | 2025-08-29 21:09:55 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:55.154318 | orchestrator | 2025-08-29 21:09:55 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:55.154399 | orchestrator | 2025-08-29 21:09:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:09:58.196139 | orchestrator | 2025-08-29 21:09:58 | INFO 
 | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:09:58.197171 | orchestrator | 2025-08-29 21:09:58 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:09:58.199339 | orchestrator | 2025-08-29 21:09:58 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:09:58.200888 | orchestrator | 2025-08-29 21:09:58 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:09:58.200954 | orchestrator | 2025-08-29 21:09:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:01.253497 | orchestrator | 2025-08-29 21:10:01 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:01.253598 | orchestrator | 2025-08-29 21:10:01 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:01.255093 | orchestrator | 2025-08-29 21:10:01 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:01.257035 | orchestrator | 2025-08-29 21:10:01 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:10:01.257058 | orchestrator | 2025-08-29 21:10:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:04.313141 | orchestrator | 2025-08-29 21:10:04 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:04.315079 | orchestrator | 2025-08-29 21:10:04 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:04.318788 | orchestrator | 2025-08-29 21:10:04 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:04.320498 | orchestrator | 2025-08-29 21:10:04 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:10:04.320524 | orchestrator | 2025-08-29 21:10:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:07.357721 | orchestrator | 2025-08-29 21:10:07 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:07.361097 | orchestrator | 2025-08-29 21:10:07 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:07.362460 | orchestrator | 2025-08-29 21:10:07 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:07.364158 | orchestrator | 2025-08-29 21:10:07 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:10:07.364718 | orchestrator | 2025-08-29 21:10:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:10.403616 | orchestrator | 2025-08-29 21:10:10 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:10.405432 | orchestrator | 2025-08-29 21:10:10 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:10.406805 | orchestrator | 2025-08-29 21:10:10 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:10.408483 | orchestrator | 2025-08-29 21:10:10 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:10:10.408864 | orchestrator | 2025-08-29 21:10:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:13.457266 | orchestrator | 2025-08-29 21:10:13 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:13.458749 | orchestrator | 2025-08-29 21:10:13 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:13.461286 | orchestrator | 2025-08-29 21:10:13 | INFO  | 
Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:13.461854 | orchestrator | 2025-08-29 21:10:13 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state STARTED 2025-08-29 21:10:13.462326 | orchestrator | 2025-08-29 21:10:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:16.500455 | orchestrator | 2025-08-29 21:10:16 | INFO  | Task c50e9251-6c97-4e57-93bf-43be9682932d is in state STARTED 2025-08-29 21:10:16.502510 | orchestrator | 2025-08-29 21:10:16 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:16.504651 | orchestrator | 2025-08-29 21:10:16 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:16.506215 | orchestrator | 2025-08-29 21:10:16 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:16.508940 | orchestrator | 2025-08-29 21:10:16 | INFO  | Task 0ce34cbb-a714-4d8f-8df4-f01190eacdf6 is in state SUCCESS 2025-08-29 21:10:16.510779 | orchestrator | 2025-08-29 21:10:16.510815 | orchestrator | 2025-08-29 21:10:16.510857 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:10:16.510867 | orchestrator | 2025-08-29 21:10:16.510874 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:10:16.510883 | orchestrator | Friday 29 August 2025 21:09:04 +0000 (0:00:00.202) 0:00:00.202 ********* 2025-08-29 21:10:16.510890 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:10:16.511040 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:10:16.511049 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:10:16.511099 | orchestrator | 2025-08-29 21:10:16.511109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:10:16.511117 | orchestrator | Friday 29 August 2025 21:09:05 +0000 (0:00:00.324) 0:00:00.527 ********* 2025-08-29 21:10:16.511389 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 21:10:16.511401 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 21:10:16.511409 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 21:10:16.511417 | orchestrator | 2025-08-29 21:10:16.511426 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 21:10:16.511434 | orchestrator | 2025-08-29 21:10:16.511442 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 21:10:16.511450 | orchestrator | Friday 29 August 2025 21:09:06 +0000 (0:00:00.887) 0:00:01.416 ********* 2025-08-29 21:10:16.511458 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:10:16.511467 | orchestrator | 2025-08-29 21:10:16.511475 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 21:10:16.511483 | orchestrator | Friday 29 August 2025 21:09:06 +0000 (0:00:00.700) 0:00:02.117 ********* 2025-08-29 21:10:16.511491 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 21:10:16.511499 | orchestrator | 2025-08-29 21:10:16.511507 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 21:10:16.511514 | orchestrator | Friday 29 August 2025 21:09:10 +0000 (0:00:03.554) 0:00:05.671 ********* 2025-08-29 
21:10:16.511530 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 21:10:16.511539 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 21:10:16.511547 | orchestrator | 2025-08-29 21:10:16.511554 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 21:10:16.511575 | orchestrator | Friday 29 August 2025 21:09:16 +0000 (0:00:06.434) 0:00:12.105 ********* 2025-08-29 21:10:16.511583 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:10:16.511591 | orchestrator | 2025-08-29 21:10:16.511599 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 21:10:16.511607 | orchestrator | Friday 29 August 2025 21:09:20 +0000 (0:00:03.247) 0:00:15.353 ********* 2025-08-29 21:10:16.511615 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:10:16.511623 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 21:10:16.511631 | orchestrator | 2025-08-29 21:10:16.511639 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 21:10:16.511647 | orchestrator | Friday 29 August 2025 21:09:24 +0000 (0:00:04.342) 0:00:19.695 ********* 2025-08-29 21:10:16.511675 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:10:16.511683 | orchestrator | 2025-08-29 21:10:16.511691 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 21:10:16.511699 | orchestrator | Friday 29 August 2025 21:09:28 +0000 (0:00:03.934) 0:00:23.630 ********* 2025-08-29 21:10:16.511707 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 21:10:16.511715 | orchestrator | 2025-08-29 21:10:16.511723 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 21:10:16.511731 | orchestrator | Friday 29 August 2025 21:09:33 +0000 (0:00:04.688) 0:00:28.319 ********* 2025-08-29 21:10:16.511845 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.511854 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:16.511861 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:16.511869 | orchestrator | 2025-08-29 21:10:16.511876 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 21:10:16.511883 | orchestrator | Friday 29 August 2025 21:09:33 +0000 (0:00:00.720) 0:00:29.039 ********* 2025-08-29 21:10:16.511893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512024 | orchestrator | 2025-08-29 21:10:16.512037 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 21:10:16.512044 | orchestrator | Friday 29 August 2025 21:09:35 +0000 (0:00:01.697) 0:00:30.737 ********* 2025-08-29 21:10:16.512052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.512059 | orchestrator | 2025-08-29 21:10:16.512066 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 21:10:16.512074 | orchestrator | Friday 29 August 2025 21:09:35 +0000 (0:00:00.177) 0:00:30.914 ********* 2025-08-29 21:10:16.512128 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.512137 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:16.512145 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:16.512152 | orchestrator | 2025-08-29 21:10:16.512159 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 21:10:16.512166 | orchestrator | Friday 29 August 2025 21:09:36 +0000 (0:00:00.405) 0:00:31.319 ********* 2025-08-29 21:10:16.512241 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:10:16.512292 | orchestrator | 2025-08-29 21:10:16.512299 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 21:10:16.512307 | orchestrator | Friday 29 August 2025 21:09:36 +0000 (0:00:00.567) 
0:00:31.886 ********* 2025-08-29 21:10:16.512314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512352 | orchestrator | 2025-08-29 21:10:16.512359 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 21:10:16.512366 | orchestrator | Friday 29 August 2025 21:09:38 +0000 (0:00:01.798) 0:00:33.685 ********* 2025-08-29 21:10:16.512378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.512394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512428 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:16.512443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512451 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:16.512458 | orchestrator | 2025-08-29 21:10:16.512465 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 21:10:16.512472 | orchestrator | Friday 29 August 2025 21:09:39 +0000 (0:00:01.056) 0:00:34.742 ********* 2025-08-29 21:10:16.512511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512624 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.512636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.512657 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:16.512664 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:16.512672 | orchestrator | 2025-08-29 21:10:16.512679 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 21:10:16.512686 | orchestrator | Friday 29 August 2025 21:09:40 +0000 (0:00:00.690) 0:00:35.432 ********* 2025-08-29 21:10:16.512700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512822 | orchestrator | 2025-08-29 21:10:16.512830 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 21:10:16.512837 | orchestrator | Friday 29 August 2025 21:09:41 +0000 (0:00:01.622) 0:00:37.055 ********* 2025-08-29 21:10:16.512845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.512990 | orchestrator | 2025-08-29 21:10:16.512999 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 21:10:16.513006 | orchestrator | Friday 29 August 2025 21:09:45 +0000 (0:00:03.235) 0:00:40.290 ********* 2025-08-29 21:10:16.513013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 21:10:16.513021 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 21:10:16.513028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 21:10:16.513035 | orchestrator | 2025-08-29 21:10:16.513042 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 21:10:16.513049 | orchestrator | Friday 29 August 2025 21:09:46 +0000 (0:00:01.727) 0:00:42.018 ********* 2025-08-29 21:10:16.513062 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:16.513128 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:16.513138 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:16.513145 | orchestrator | 2025-08-29 21:10:16.513152 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 21:10:16.513178 | orchestrator | Friday 29 August 2025 21:09:48 +0000 (0:00:01.754) 0:00:43.772 ********* 2025-08-29 21:10:16.513187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.513194 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:16.513202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.513210 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:16.513608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 21:10:16.513634 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:16.513641 | orchestrator | 2025-08-29 21:10:16.513649 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 21:10:16.513656 | orchestrator | Friday 29 August 2025 21:09:49 +0000 (0:00:00.812) 0:00:44.585 ********* 2025-08-29 21:10:16.513664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.513683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.513691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 21:10:16.513699 | orchestrator | 2025-08-29 21:10:16.513706 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 21:10:16.513735 | orchestrator | Friday 29 August 2025 21:09:50 +0000 (0:00:01.523) 0:00:46.108 ********* 2025-08-29 21:10:16.513744 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:16.513752 | orchestrator | 2025-08-29 21:10:16.513759 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 21:10:16.513766 | orchestrator | Friday 29 August 2025 21:09:52 +0000 (0:00:02.011) 0:00:48.119 ********* 2025-08-29 21:10:16.513773 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:16.513780 | orchestrator | 2025-08-29 21:10:16.513788 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 21:10:16.513795 | orchestrator | Friday 29 August 2025 21:09:55 +0000 (0:00:02.359) 0:00:50.478 ********* 2025-08-29 21:10:16.513810 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:16.513818 | orchestrator | 2025-08-29 21:10:16.513832 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 
21:10:16.513840 | orchestrator | Friday 29 August 2025 21:10:07 +0000 (0:00:12.717) 0:01:03.196 ********* 2025-08-29 21:10:16.513847 | orchestrator | 2025-08-29 21:10:16.513854 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 21:10:16.513862 | orchestrator | Friday 29 August 2025 21:10:08 +0000 (0:00:00.059) 0:01:03.255 ********* 2025-08-29 21:10:16.513869 | orchestrator | 2025-08-29 21:10:16.513876 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 21:10:16.513889 | orchestrator | Friday 29 August 2025 21:10:08 +0000 (0:00:00.058) 0:01:03.313 ********* 2025-08-29 21:10:16.514176 | orchestrator | 2025-08-29 21:10:16.514191 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-08-29 21:10:16.514199 | orchestrator | Friday 29 August 2025 21:10:08 +0000 (0:00:00.059) 0:01:03.373 ********* 2025-08-29 21:10:16.514206 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:16.514213 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:16.514220 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:16.514271 | orchestrator | 2025-08-29 21:10:16.514278 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:10:16.514285 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 21:10:16.514293 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 21:10:16.514299 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 21:10:16.514305 | orchestrator | 2025-08-29 21:10:16.514312 | orchestrator | 2025-08-29 21:10:16.514318 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:10:16.514324 | orchestrator | Friday 29 August 2025 21:10:13 +0000 (0:00:05.771) 0:01:09.144 ********* 2025-08-29 21:10:16.514330 | orchestrator | =============================================================================== 2025-08-29 21:10:16.514336 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.72s 2025-08-29 21:10:16.514350 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.43s 2025-08-29 21:10:16.514360 | orchestrator | placement : Restart placement-api container ----------------------------- 5.77s 2025-08-29 21:10:16.514367 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.69s 2025-08-29 21:10:16.514373 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.34s 2025-08-29 21:10:16.514379 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.93s 2025-08-29 21:10:16.514385 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.55s 2025-08-29 21:10:16.514392 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.25s 2025-08-29 21:10:16.514398 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.24s 2025-08-29 21:10:16.514404 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s 2025-08-29 21:10:16.514410 | orchestrator | placement : Creating placement databases -------------------------------- 
2.01s 2025-08-29 21:10:16.514416 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.80s 2025-08-29 21:10:16.514422 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.75s 2025-08-29 21:10:16.514428 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.73s 2025-08-29 21:10:16.514525 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.70s 2025-08-29 21:10:16.514727 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s 2025-08-29 21:10:16.514733 | orchestrator | placement : Check placement containers ---------------------------------- 1.52s 2025-08-29 21:10:16.514739 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.06s 2025-08-29 21:10:16.514925 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2025-08-29 21:10:16.514932 | orchestrator | placement : Copying over existing policy file --------------------------- 0.81s 2025-08-29 21:10:16.514938 | orchestrator | 2025-08-29 21:10:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:19.550338 | orchestrator | 2025-08-29 21:10:19 | INFO  | Task c50e9251-6c97-4e57-93bf-43be9682932d is in state STARTED 2025-08-29 21:10:19.552090 | orchestrator | 2025-08-29 21:10:19 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:19.553729 | orchestrator | 2025-08-29 21:10:19 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:19.555242 | orchestrator | 2025-08-29 21:10:19 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:19.555268 | orchestrator | 2025-08-29 21:10:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:22.600952 | orchestrator | 2025-08-29 21:10:22 | INFO  | Task c50e9251-6c97-4e57-93bf-43be9682932d is in state SUCCESS 2025-08-29 21:10:22.601120 | orchestrator | 2025-08-29 21:10:22 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:22.602800 | orchestrator | 2025-08-29 21:10:22 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:22.603540 | orchestrator | 2025-08-29 21:10:22 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:22.604336 | orchestrator | 2025-08-29 21:10:22 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:22.604508 | orchestrator | 2025-08-29 21:10:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:25.639270 | orchestrator | 2025-08-29 21:10:25 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:25.641076 | orchestrator | 2025-08-29 21:10:25 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:25.642386 | orchestrator | 2025-08-29 21:10:25 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:25.643361 | orchestrator | 2025-08-29 21:10:25 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:25.643566 | orchestrator | 2025-08-29 21:10:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:28.677248 | orchestrator | 2025-08-29 21:10:28 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:28.679435 | orchestrator | 2025-08-29 21:10:28 | INFO  
| Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state STARTED 2025-08-29 21:10:28.679449 | orchestrator | 2025-08-29 21:10:28 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:28.679454 | orchestrator | 2025-08-29 21:10:28 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:28.679459 | orchestrator | 2025-08-29 21:10:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:31.709152 | orchestrator | 2025-08-29 21:10:31 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:31.713793 | orchestrator | 2025-08-29 21:10:31 | INFO  | Task a60d88a1-6e3c-4f7e-815a-bf343ebb0e8a is in state SUCCESS 2025-08-29 21:10:31.715104 | orchestrator | 2025-08-29 21:10:31.715193 | orchestrator | 2025-08-29 21:10:31.715210 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:10:31.715223 | orchestrator | 2025-08-29 21:10:31.715235 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:10:31.715247 | orchestrator | Friday 29 August 2025 21:10:18 +0000 (0:00:00.177) 0:00:00.177 ********* 2025-08-29 21:10:31.715259 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:10:31.715271 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:10:31.715282 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:10:31.715293 | orchestrator | 2025-08-29 21:10:31.715304 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:10:31.715316 | orchestrator | Friday 29 August 2025 21:10:18 +0000 (0:00:00.287) 0:00:00.464 ********* 2025-08-29 21:10:31.715349 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 21:10:31.715362 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 21:10:31.715373 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 21:10:31.715384 | orchestrator | 2025-08-29 21:10:31.715395 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 21:10:31.715406 | orchestrator | 2025-08-29 21:10:31.715418 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 21:10:31.715429 | orchestrator | Friday 29 August 2025 21:10:18 +0000 (0:00:00.598) 0:00:01.062 ********* 2025-08-29 21:10:31.715440 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:10:31.715451 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:10:31.715461 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:10:31.715472 | orchestrator | 2025-08-29 21:10:31.715483 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:10:31.715495 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:10:31.715507 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:10:31.715539 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:10:31.715551 | orchestrator | 2025-08-29 21:10:31.715562 | orchestrator | 2025-08-29 21:10:31.715572 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:10:31.715583 | orchestrator | Friday 29 August 2025 21:10:19 +0000 (0:00:00.696) 0:00:01.759 ********* 2025-08-29 21:10:31.715594 
| orchestrator | =============================================================================== 2025-08-29 21:10:31.715605 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.70s 2025-08-29 21:10:31.715616 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-08-29 21:10:31.715627 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-08-29 21:10:31.715638 | orchestrator | 2025-08-29 21:10:31.715683 | orchestrator | 2025-08-29 21:10:31.715696 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:10:31.716133 | orchestrator | 2025-08-29 21:10:31.716149 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:10:31.716160 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.243) 0:00:00.243 ********* 2025-08-29 21:10:31.716171 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:10:31.716182 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:10:31.716192 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:10:31.716203 | orchestrator | 2025-08-29 21:10:31.716214 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:10:31.716225 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.298) 0:00:00.542 ********* 2025-08-29 21:10:31.716237 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 21:10:31.716248 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-08-29 21:10:31.716259 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 21:10:31.716270 | orchestrator | 2025-08-29 21:10:31.716280 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 21:10:31.716291 | orchestrator | 2025-08-29 21:10:31.716302 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 21:10:31.716313 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.365) 0:00:00.908 ********* 2025-08-29 21:10:31.716324 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:10:31.716335 | orchestrator | 2025-08-29 21:10:31.716346 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 21:10:31.716371 | orchestrator | Friday 29 August 2025 21:07:41 +0000 (0:00:00.528) 0:00:01.436 ********* 2025-08-29 21:10:31.716712 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 21:10:31.716728 | orchestrator | 2025-08-29 21:10:31.716739 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 21:10:31.716750 | orchestrator | Friday 29 August 2025 21:07:45 +0000 (0:00:03.630) 0:00:05.066 ********* 2025-08-29 21:10:31.716761 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 21:10:31.716773 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 21:10:31.716785 | orchestrator | 2025-08-29 21:10:31.716796 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 21:10:31.716807 | orchestrator | Friday 29 August 2025 21:07:51 +0000 
(0:00:06.492) 0:00:11.559 ********* 2025-08-29 21:10:31.716818 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-08-29 21:10:31.716829 | orchestrator | 2025-08-29 21:10:31.716841 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 21:10:31.716859 | orchestrator | Friday 29 August 2025 21:07:54 +0000 (0:00:03.172) 0:00:14.731 ********* 2025-08-29 21:10:31.717042 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:10:31.717061 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 21:10:31.717073 | orchestrator | 2025-08-29 21:10:31.717084 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 21:10:31.717096 | orchestrator | Friday 29 August 2025 21:07:58 +0000 (0:00:03.805) 0:00:18.537 ********* 2025-08-29 21:10:31.717107 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:10:31.717118 | orchestrator | 2025-08-29 21:10:31.717129 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 21:10:31.717141 | orchestrator | Friday 29 August 2025 21:08:01 +0000 (0:00:03.246) 0:00:21.783 ********* 2025-08-29 21:10:31.717152 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 21:10:31.717163 | orchestrator | 2025-08-29 21:10:31.717175 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 21:10:31.717186 | orchestrator | Friday 29 August 2025 21:08:06 +0000 (0:00:04.194) 0:00:25.978 ********* 2025-08-29 21:10:31.717200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717238 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2025-08-29 21:10:31.717564 | orchestrator | 2025-08-29 21:10:31.717576 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 21:10:31.717588 | orchestrator | Friday 29 August 2025 21:08:09 +0000 (0:00:03.043) 0:00:29.021 ********* 2025-08-29 21:10:31.717599 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.717611 | orchestrator | 2025-08-29 21:10:31.717623 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 21:10:31.717634 | orchestrator | Friday 29 August 2025 21:08:09 +0000 (0:00:00.118) 0:00:29.139 ********* 2025-08-29 21:10:31.717646 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.717659 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.717672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.717684 | orchestrator | 2025-08-29 21:10:31.717697 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 21:10:31.717710 | orchestrator | Friday 29 August 2025 21:08:09 +0000 (0:00:00.256) 0:00:29.396 ********* 2025-08-29 21:10:31.717722 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:10:31.717735 | orchestrator | 2025-08-29 21:10:31.717748 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 21:10:31.717760 | orchestrator | Friday 29 August 2025 21:08:10 +0000 (0:00:00.656) 0:00:30.052 ********* 2025-08-29 21:10:31.717773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.717856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2025-08-29 21:10:31.717938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.717951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.718217 | orchestrator | 
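The loop items printed above all share one structure: a mapping of designate services to their container definition, which each of these tasks iterates over per node. One item (designate_api on testbed-node-0), reproduced below as a Python literal copied from the output, shows the fields the tasks consume: container name, image, bind mounts and healthcheck. The summarize() helper is illustrative only and is not part of kolla-ansible.

    # One loop item from the output above (designate_api on testbed-node-0),
    # reproduced as a Python literal. summarize() is illustrative only;
    # it is not part of kolla-ansible.
    designate_services = {
        "designate-api": {
            "container_name": "designate_api",
            "group": "designate-api",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/designate-api:19.0.1.20250711",
            "volumes": [
                "/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
            ],
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
                "timeout": "30",
            },
        },
    }

    def summarize(services: dict) -> None:
        """Print container name, image tag and healthcheck command per enabled service."""
        for svc in services.values():
            if not svc.get("enabled"):
                continue
            tag = svc["image"].rsplit(":", 1)[-1]
            check = " ".join(svc["healthcheck"]["test"][1:])
            print(f"{svc['container_name']}: tag={tag} check={check!r}")

    if __name__ == "__main__":
        summarize(designate_services)

Running the sketch prints one line per enabled service, e.g. the designate_api entry with tag 19.0.1.20250711 and its healthcheck_curl command.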
2025-08-29 21:10:31.718228 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 21:10:31.718239 | orchestrator | Friday 29 August 2025 21:08:16 +0000 (0:00:05.920) 0:00:35.972 ********* 2025-08-29 21:10:31.718251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718364 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.718376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718490 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.718501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718612 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.718623 | orchestrator | 2025-08-29 21:10:31.718634 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 21:10:31.718645 | orchestrator | Friday 29 August 2025 21:08:16 +0000 (0:00:00.931) 0:00:36.904 ********* 2025-08-29 21:10:31.718656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718767 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.718778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.718960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.718972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.718984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.718995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719051 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719077 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.719088 | orchestrator | 2025-08-29 21:10:31.719099 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 21:10:31.719110 | orchestrator | Friday 29 August 2025 21:08:18 +0000 (0:00:01.355) 0:00:38.259 ********* 2025-08-29 21:10:31.719122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 
21:10:31.719244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719431 | orchestrator | 2025-08-29 
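The config.json files copied in the task above drive kolla's container entrypoint: at startup it runs the listed command and moves the rendered files from /var/lib/kolla/config_files/ (the read-only bind mount visible in the volume lists) into place inside the container. A minimal sketch of that shape follows; the command string and file list for designate-api are assumptions for illustration, not taken from this log.

    import json

    # Hypothetical shape of a kolla config.json for designate-api, written as a
    # Python dict for illustration. The command string and file list are
    # assumptions; the real files are rendered by kolla-ansible on each node
    # under /etc/kolla/<service>/ and bind-mounted read-only into
    # /var/lib/kolla/config_files/ (see the volume lists above).
    designate_api_config_json = {
        "command": "designate-api --config-file /etc/designate/designate.conf",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/designate.conf",
                "dest": "/etc/designate/designate.conf",
                "owner": "designate",
                "perm": "0600",
            },
        ],
    }

    if __name__ == "__main__":
        print(json.dumps(designate_api_config_json, indent=2))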
21:10:31.719440 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 21:10:31.719450 | orchestrator | Friday 29 August 2025 21:08:25 +0000 (0:00:07.473) 0:00:45.733 ********* 2025-08-29 21:10:31.719460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.719525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719594 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719712 | orchestrator | 2025-08-29 21:10:31.719721 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 21:10:31.719731 | orchestrator | Friday 29 August 2025 21:08:47 +0000 (0:00:21.330) 0:01:07.064 ********* 2025-08-29 21:10:31.719741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 21:10:31.719751 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 21:10:31.719761 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 21:10:31.719770 | orchestrator | 2025-08-29 21:10:31.719780 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 21:10:31.719790 | orchestrator | Friday 29 August 2025 21:08:54 +0000 (0:00:06.922) 0:01:13.986 ********* 2025-08-29 21:10:31.719799 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 21:10:31.719809 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 21:10:31.719818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 21:10:31.719828 | orchestrator | 2025-08-29 21:10:31.719837 | 
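pools.yaml and named.conf, copied in the two tasks above, wire designate to the bind9 backend running in the designate_backend_bind9 container, whose healthcheck in this log is 'healthcheck_listen named 53'. A rough stand-in for that check is sketched below; the address is the testbed-node-0 internal IP taken from the healthcheck URLs above and is an assumption here, and a plain TCP connect only approximates the real script, which inspects the listening named process inside the container.

    import socket

    # Rough stand-in for the 'healthcheck_listen named 53' test shown above:
    # attempt a TCP connection to the DNS port. The address is the node-0
    # internal IP from the healthcheck URLs above (an assumption for this
    # sketch); the real healthcheck script runs inside the container and
    # checks the listening named process rather than connecting remotely.
    def dns_port_open(host: str = "192.168.16.10", port: int = 53, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print("named reachable:", dns_port_open())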
orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-08-29 21:10:31.719847 | orchestrator | Friday 29 August 2025 21:08:57 +0000 (0:00:03.776) 0:01:17.763 ********* 2025-08-29 21:10:31.719857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.719881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.719916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.719928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.719949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.719987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720115 | orchestrator | 2025-08-29 21:10:31.720125 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 21:10:31.720135 | orchestrator | Friday 29 August 2025 21:09:00 +0000 (0:00:03.183) 0:01:20.947 ********* 2025-08-29 21:10:31.720152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720385 | orchestrator | 2025-08-29 21:10:31.720395 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 21:10:31.720405 | orchestrator | Friday 29 August 2025 21:09:03 +0000 (0:00:02.966) 0:01:23.913 ********* 2025-08-29 21:10:31.720415 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.720425 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.720435 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.720445 | orchestrator | 2025-08-29 21:10:31.720455 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 21:10:31.720464 | orchestrator | Friday 29 August 2025 21:09:04 +0000 (0:00:00.356) 0:01:24.270 ********* 2025-08-29 21:10:31.720474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.720495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720550 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.720560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.720581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720636 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.720646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 21:10:31.720657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 21:10:31.720667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 21:10:31.720725 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.720735 | orchestrator | 2025-08-29 21:10:31.720745 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 21:10:31.720755 | orchestrator | Friday 29 August 2025 21:09:05 +0000 (0:00:00.782) 0:01:25.052 ********* 2025-08-29 21:10:31.720765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.720776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.720794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 21:10:31.720811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720952 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.720998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 21:10:31.721008 | orchestrator | 2025-08-29 21:10:31.721017 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 21:10:31.721027 | orchestrator | Friday 29 August 2025 21:09:10 +0000 (0:00:05.372) 0:01:30.424 ********* 2025-08-29 21:10:31.721037 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:10:31.721047 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:10:31.721057 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:10:31.721066 | orchestrator | 2025-08-29 21:10:31.721076 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 21:10:31.721086 | orchestrator | Friday 29 August 2025 21:09:10 +0000 (0:00:00.388) 0:01:30.813 ********* 2025-08-29 21:10:31.721095 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 21:10:31.721105 | orchestrator | 2025-08-29 21:10:31.721114 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 21:10:31.721124 | 
orchestrator | Friday 29 August 2025 21:09:12 +0000 (0:00:02.015) 0:01:32.828 ********* 2025-08-29 21:10:31.721134 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:10:31.721144 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 21:10:31.721153 | orchestrator | 2025-08-29 21:10:31.721163 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 21:10:31.721172 | orchestrator | Friday 29 August 2025 21:09:15 +0000 (0:00:02.435) 0:01:35.264 ********* 2025-08-29 21:10:31.721182 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721192 | orchestrator | 2025-08-29 21:10:31.721201 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 21:10:31.721211 | orchestrator | Friday 29 August 2025 21:09:30 +0000 (0:00:15.086) 0:01:50.350 ********* 2025-08-29 21:10:31.721220 | orchestrator | 2025-08-29 21:10:31.721230 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 21:10:31.721240 | orchestrator | Friday 29 August 2025 21:09:30 +0000 (0:00:00.086) 0:01:50.437 ********* 2025-08-29 21:10:31.721249 | orchestrator | 2025-08-29 21:10:31.721259 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 21:10:31.721269 | orchestrator | Friday 29 August 2025 21:09:30 +0000 (0:00:00.066) 0:01:50.504 ********* 2025-08-29 21:10:31.721278 | orchestrator | 2025-08-29 21:10:31.721293 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 21:10:31.721303 | orchestrator | Friday 29 August 2025 21:09:30 +0000 (0:00:00.150) 0:01:50.654 ********* 2025-08-29 21:10:31.721312 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721322 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721331 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721341 | orchestrator | 2025-08-29 21:10:31.721351 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 21:10:31.721360 | orchestrator | Friday 29 August 2025 21:09:46 +0000 (0:00:15.366) 0:02:06.020 ********* 2025-08-29 21:10:31.721370 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721380 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721389 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721399 | orchestrator | 2025-08-29 21:10:31.721408 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 21:10:31.721418 | orchestrator | Friday 29 August 2025 21:09:52 +0000 (0:00:06.802) 0:02:12.823 ********* 2025-08-29 21:10:31.721427 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721437 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721447 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721456 | orchestrator | 2025-08-29 21:10:31.721466 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 21:10:31.721476 | orchestrator | Friday 29 August 2025 21:10:00 +0000 (0:00:07.977) 0:02:20.800 ********* 2025-08-29 21:10:31.721485 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721495 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721505 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721514 | orchestrator | 2025-08-29 21:10:31.721524 | 
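The containers being restarted by these handlers carry the healthcheck definitions listed in the task items above: healthcheck_curl against the API endpoint on port 9001, healthcheck_listen for the bind9 named process on port 53, and healthcheck_port for the AMQP connection on 5672. As a rough illustration only, and assuming those helpers behave as their names suggest (the real helpers shipped in the Kolla images are shell scripts with more precise, process-aware checks inside the container), simplified stand-ins could look like the sketch below; the check_curl/check_listen/check_port names are made up for this sketch.

import socket
import urllib.request


def check_curl(url: str, timeout: float = 5.0) -> bool:
    """healthcheck_curl-style probe (simplified): the HTTP endpoint answers without an error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False


def check_listen(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> bool:
    """healthcheck_listen-style probe (simplified): something accepts TCP connections on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_port(port: int, proc_net_tcp: str = "/proc/net/tcp") -> bool:
    """healthcheck_port-style probe (simplified): an established TCP connection to the given
    remote port exists, e.g. a designate service connected to RabbitMQ on 5672."""
    with open(proc_net_tcp) as fh:
        next(fh)  # skip the header line
        for line in fh:
            fields = line.split()
            remote_port = int(fields[2].split(":")[1], 16)  # rem_address is "HEXIP:HEXPORT"
            if remote_port == port and fields[3] == "01":  # 01 = TCP_ESTABLISHED
                return True
    return False


if __name__ == "__main__":
    # Example probes matching the healthcheck targets seen in the service definitions above.
    print("designate-api:", check_curl("http://192.168.16.10:9001"))
    print("bind9 named:", check_listen(53))
    print("worker -> rabbitmq:", check_port(5672))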
orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 21:10:31.721533 | orchestrator | Friday 29 August 2025 21:10:06 +0000 (0:00:05.668) 0:02:26.468 ********* 2025-08-29 21:10:31.721543 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721552 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721562 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721572 | orchestrator | 2025-08-29 21:10:31.721585 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-08-29 21:10:31.721600 | orchestrator | Friday 29 August 2025 21:10:12 +0000 (0:00:05.989) 0:02:32.458 ********* 2025-08-29 21:10:31.721610 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721620 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:10:31.721630 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:10:31.721639 | orchestrator | 2025-08-29 21:10:31.721649 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-08-29 21:10:31.721659 | orchestrator | Friday 29 August 2025 21:10:23 +0000 (0:00:11.133) 0:02:43.591 ********* 2025-08-29 21:10:31.721668 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:10:31.721678 | orchestrator | 2025-08-29 21:10:31.721688 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:10:31.721698 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 21:10:31.721708 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 21:10:31.721718 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 21:10:31.721728 | orchestrator | 2025-08-29 21:10:31.721737 | orchestrator | 2025-08-29 21:10:31.721747 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:10:31.721756 | orchestrator | Friday 29 August 2025 21:10:30 +0000 (0:00:06.709) 0:02:50.300 ********* 2025-08-29 21:10:31.721766 | orchestrator | =============================================================================== 2025-08-29 21:10:31.721775 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.33s 2025-08-29 21:10:31.721790 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.37s 2025-08-29 21:10:31.721799 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.09s 2025-08-29 21:10:31.721809 | orchestrator | designate : Restart designate-worker container ------------------------- 11.13s 2025-08-29 21:10:31.721819 | orchestrator | designate : Restart designate-central container ------------------------- 7.98s 2025-08-29 21:10:31.721828 | orchestrator | designate : Copying over config.json files for services ----------------- 7.47s 2025-08-29 21:10:31.721838 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.92s 2025-08-29 21:10:31.721847 | orchestrator | designate : Restart designate-api container ----------------------------- 6.80s 2025-08-29 21:10:31.721857 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.71s 2025-08-29 21:10:31.721866 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.49s 2025-08-29 
21:10:31.721876 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.99s 2025-08-29 21:10:31.721906 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.92s 2025-08-29 21:10:31.721917 | orchestrator | designate : Restart designate-producer container ------------------------ 5.67s 2025-08-29 21:10:31.721927 | orchestrator | designate : Check designate containers ---------------------------------- 5.37s 2025-08-29 21:10:31.721936 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.19s 2025-08-29 21:10:31.721946 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.81s 2025-08-29 21:10:31.721955 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.78s 2025-08-29 21:10:31.721965 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.63s 2025-08-29 21:10:31.721974 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.25s 2025-08-29 21:10:31.721984 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.18s 2025-08-29 21:10:31.721993 | orchestrator | 2025-08-29 21:10:31 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:31.722003 | orchestrator | 2025-08-29 21:10:31 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:31.722037 | orchestrator | 2025-08-29 21:10:31 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:31.722049 | orchestrator | 2025-08-29 21:10:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:34.749127 | orchestrator | 2025-08-29 21:10:34 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:34.750511 | orchestrator | 2025-08-29 21:10:34 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:34.751997 | orchestrator | 2025-08-29 21:10:34 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:34.753291 | orchestrator | 2025-08-29 21:10:34 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:34.753329 | orchestrator | 2025-08-29 21:10:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:37.790665 | orchestrator | 2025-08-29 21:10:37 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:37.790765 | orchestrator | 2025-08-29 21:10:37 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:37.790781 | orchestrator | 2025-08-29 21:10:37 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:37.790793 | orchestrator | 2025-08-29 21:10:37 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:37.790805 | orchestrator | 2025-08-29 21:10:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:40.809127 | orchestrator | 2025-08-29 21:10:40 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:40.809216 | orchestrator | 2025-08-29 21:10:40 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:40.809691 | orchestrator | 2025-08-29 21:10:40 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:40.810257 | orchestrator | 2025-08-29 21:10:40 | INFO  | Task 
0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:40.810279 | orchestrator | 2025-08-29 21:10:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:43.834122 | orchestrator | 2025-08-29 21:10:43 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:43.834754 | orchestrator | 2025-08-29 21:10:43 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:43.835311 | orchestrator | 2025-08-29 21:10:43 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:43.836091 | orchestrator | 2025-08-29 21:10:43 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:43.836157 | orchestrator | 2025-08-29 21:10:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:46.859184 | orchestrator | 2025-08-29 21:10:46 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:46.859282 | orchestrator | 2025-08-29 21:10:46 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:46.859984 | orchestrator | 2025-08-29 21:10:46 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:46.860450 | orchestrator | 2025-08-29 21:10:46 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:46.860473 | orchestrator | 2025-08-29 21:10:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:49.913800 | orchestrator | 2025-08-29 21:10:49 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:49.913935 | orchestrator | 2025-08-29 21:10:49 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:49.913951 | orchestrator | 2025-08-29 21:10:49 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:49.913962 | orchestrator | 2025-08-29 21:10:49 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:49.913974 | orchestrator | 2025-08-29 21:10:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:52.936984 | orchestrator | 2025-08-29 21:10:52 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:52.937087 | orchestrator | 2025-08-29 21:10:52 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:52.937109 | orchestrator | 2025-08-29 21:10:52 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:52.939317 | orchestrator | 2025-08-29 21:10:52 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:52.939359 | orchestrator | 2025-08-29 21:10:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:56.023122 | orchestrator | 2025-08-29 21:10:55 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:56.023211 | orchestrator | 2025-08-29 21:10:55 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:56.023227 | orchestrator | 2025-08-29 21:10:55 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:56.023263 | orchestrator | 2025-08-29 21:10:55 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:56.023275 | orchestrator | 2025-08-29 21:10:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:10:58.996373 | orchestrator | 2025-08-29 21:10:58 | INFO  | Task 
bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:10:58.996445 | orchestrator | 2025-08-29 21:10:58 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:10:58.996709 | orchestrator | 2025-08-29 21:10:58 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:10:59.043373 | orchestrator | 2025-08-29 21:10:58 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:10:59.043495 | orchestrator | 2025-08-29 21:10:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:02.031573 | orchestrator | 2025-08-29 21:11:02 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:02.033973 | orchestrator | 2025-08-29 21:11:02 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:02.036280 | orchestrator | 2025-08-29 21:11:02 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:11:02.038443 | orchestrator | 2025-08-29 21:11:02 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:02.038865 | orchestrator | 2025-08-29 21:11:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:05.072631 | orchestrator | 2025-08-29 21:11:05 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:05.073703 | orchestrator | 2025-08-29 21:11:05 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:05.074682 | orchestrator | 2025-08-29 21:11:05 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state STARTED 2025-08-29 21:11:05.076012 | orchestrator | 2025-08-29 21:11:05 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:05.076250 | orchestrator | 2025-08-29 21:11:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:08.114757 | orchestrator | 2025-08-29 21:11:08 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:08.114958 | orchestrator | 2025-08-29 21:11:08 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:08.114979 | orchestrator | 2025-08-29 21:11:08 | INFO  | Task 32498dc7-c10a-4e22-9bc6-bd54e49a6b96 is in state SUCCESS 2025-08-29 21:11:08.115002 | orchestrator | 2025-08-29 21:11:08 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:08.115013 | orchestrator | 2025-08-29 21:11:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:11.142702 | orchestrator | 2025-08-29 21:11:11 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:11.143483 | orchestrator | 2025-08-29 21:11:11 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:11.144358 | orchestrator | 2025-08-29 21:11:11 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:11.145506 | orchestrator | 2025-08-29 21:11:11 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:11.145528 | orchestrator | 2025-08-29 21:11:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:14.179615 | orchestrator | 2025-08-29 21:11:14 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:14.179846 | orchestrator | 2025-08-29 21:11:14 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:14.180532 | orchestrator | 2025-08-29 21:11:14 | INFO  | Task 
3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:14.182011 | orchestrator | 2025-08-29 21:11:14 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:14.182081 | orchestrator | 2025-08-29 21:11:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:17.227112 | orchestrator | 2025-08-29 21:11:17 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:17.227198 | orchestrator | 2025-08-29 21:11:17 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:17.229953 | orchestrator | 2025-08-29 21:11:17 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:17.230590 | orchestrator | 2025-08-29 21:11:17 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:17.230620 | orchestrator | 2025-08-29 21:11:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:20.251731 | orchestrator | 2025-08-29 21:11:20 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:20.253440 | orchestrator | 2025-08-29 21:11:20 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:20.255050 | orchestrator | 2025-08-29 21:11:20 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:20.256383 | orchestrator | 2025-08-29 21:11:20 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:20.256408 | orchestrator | 2025-08-29 21:11:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:23.297240 | orchestrator | 2025-08-29 21:11:23 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:23.298235 | orchestrator | 2025-08-29 21:11:23 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:23.299214 | orchestrator | 2025-08-29 21:11:23 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:23.299792 | orchestrator | 2025-08-29 21:11:23 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:23.299815 | orchestrator | 2025-08-29 21:11:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:26.339471 | orchestrator | 2025-08-29 21:11:26 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:26.343452 | orchestrator | 2025-08-29 21:11:26 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:26.346989 | orchestrator | 2025-08-29 21:11:26 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:26.350979 | orchestrator | 2025-08-29 21:11:26 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:26.351706 | orchestrator | 2025-08-29 21:11:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:29.392826 | orchestrator | 2025-08-29 21:11:29 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:29.393364 | orchestrator | 2025-08-29 21:11:29 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:29.394956 | orchestrator | 2025-08-29 21:11:29 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:29.395541 | orchestrator | 2025-08-29 21:11:29 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:29.395674 | orchestrator | 2025-08-29 21:11:29 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 21:11:32.437073 | orchestrator | 2025-08-29 21:11:32 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:32.438487 | orchestrator | 2025-08-29 21:11:32 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:32.439376 | orchestrator | 2025-08-29 21:11:32 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:32.440203 | orchestrator | 2025-08-29 21:11:32 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state STARTED 2025-08-29 21:11:32.440464 | orchestrator | 2025-08-29 21:11:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:35.485127 | orchestrator | 2025-08-29 21:11:35 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:35.485542 | orchestrator | 2025-08-29 21:11:35 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:35.486957 | orchestrator | 2025-08-29 21:11:35 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:35.489569 | orchestrator | 2025-08-29 21:11:35 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:35.494590 | orchestrator | 2025-08-29 21:11:35.494629 | orchestrator | 2025-08-29 21:11:35.494642 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:11:35.494654 | orchestrator | 2025-08-29 21:11:35.494666 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:11:35.494678 | orchestrator | Friday 29 August 2025 21:10:35 +0000 (0:00:00.227) 0:00:00.227 ********* 2025-08-29 21:11:35.494689 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:11:35.494701 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:11:35.494713 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:11:35.494724 | orchestrator | ok: [testbed-manager] 2025-08-29 21:11:35.494734 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:11:35.494745 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:11:35.494756 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:11:35.494767 | orchestrator | 2025-08-29 21:11:35.494778 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:11:35.494789 | orchestrator | Friday 29 August 2025 21:10:35 +0000 (0:00:00.691) 0:00:00.919 ********* 2025-08-29 21:11:35.494800 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.494812 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.494823 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.494834 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.495147 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.495160 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.495171 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 21:11:35.495182 | orchestrator | 2025-08-29 21:11:35.495212 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 21:11:35.495224 | orchestrator | 2025-08-29 21:11:35.495235 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 21:11:35.495246 | orchestrator | Friday 29 August 2025 
21:10:36 +0000 (0:00:00.707) 0:00:01.627 ********* 2025-08-29 21:11:35.495258 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:11:35.495271 | orchestrator | 2025-08-29 21:11:35.495283 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 21:11:35.495294 | orchestrator | Friday 29 August 2025 21:10:38 +0000 (0:00:01.873) 0:00:03.501 ********* 2025-08-29 21:11:35.495304 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-08-29 21:11:35.495316 | orchestrator | 2025-08-29 21:11:35.495327 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 21:11:35.495361 | orchestrator | Friday 29 August 2025 21:10:41 +0000 (0:00:03.327) 0:00:06.828 ********* 2025-08-29 21:11:35.495374 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 21:11:35.495387 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 21:11:35.495398 | orchestrator | 2025-08-29 21:11:35.495409 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 21:11:35.495420 | orchestrator | Friday 29 August 2025 21:10:48 +0000 (0:00:06.766) 0:00:13.594 ********* 2025-08-29 21:11:35.495431 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:11:35.495442 | orchestrator | 2025-08-29 21:11:35.495452 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 21:11:35.495463 | orchestrator | Friday 29 August 2025 21:10:51 +0000 (0:00:03.161) 0:00:16.756 ********* 2025-08-29 21:11:35.495474 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:11:35.495485 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-08-29 21:11:35.495496 | orchestrator | 2025-08-29 21:11:35.495507 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 21:11:35.495517 | orchestrator | Friday 29 August 2025 21:10:56 +0000 (0:00:04.186) 0:00:20.943 ********* 2025-08-29 21:11:35.495528 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:11:35.495539 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-08-29 21:11:35.495550 | orchestrator | 2025-08-29 21:11:35.495561 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 21:11:35.495571 | orchestrator | Friday 29 August 2025 21:11:02 +0000 (0:00:06.426) 0:00:27.369 ********* 2025-08-29 21:11:35.495582 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-08-29 21:11:35.495593 | orchestrator | 2025-08-29 21:11:35.495603 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:11:35.495614 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495625 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495637 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 
21:11:35.495648 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495660 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495681 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495693 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:11:35.495703 | orchestrator | 2025-08-29 21:11:35.495714 | orchestrator | 2025-08-29 21:11:35.495725 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:11:35.495736 | orchestrator | Friday 29 August 2025 21:11:07 +0000 (0:00:04.664) 0:00:32.033 ********* 2025-08-29 21:11:35.495747 | orchestrator | =============================================================================== 2025-08-29 21:11:35.495761 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.77s 2025-08-29 21:11:35.495772 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.43s 2025-08-29 21:11:35.495785 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.66s 2025-08-29 21:11:35.495806 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.19s 2025-08-29 21:11:35.495819 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.33s 2025-08-29 21:11:35.495831 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.16s 2025-08-29 21:11:35.495843 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.87s 2025-08-29 21:11:35.495880 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-08-29 21:11:35.495897 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2025-08-29 21:11:35.495910 | orchestrator | 2025-08-29 21:11:35.495922 | orchestrator | 2025-08-29 21:11:35.495934 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:11:35.495946 | orchestrator | 2025-08-29 21:11:35.495958 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:11:35.495969 | orchestrator | Friday 29 August 2025 21:09:43 +0000 (0:00:00.443) 0:00:00.443 ********* 2025-08-29 21:11:35.495981 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:11:35.495993 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:11:35.496004 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:11:35.496016 | orchestrator | 2025-08-29 21:11:35.496029 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:11:35.496041 | orchestrator | Friday 29 August 2025 21:09:44 +0000 (0:00:00.512) 0:00:00.955 ********* 2025-08-29 21:11:35.496052 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 21:11:35.496065 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 21:11:35.496077 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 21:11:35.496089 | orchestrator | 2025-08-29 21:11:35.496101 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 21:11:35.496114 | orchestrator | 
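The ceph-rgw recap above and the magnum tasks below run the same service-ks-register sequence: create the service, its internal and public endpoints, a service user, any extra roles, then grant the role on the service project. The sketch below restates that sequence with openstacksdk purely for illustration; the cloud name, region and password are placeholders, and kolla-ansible itself performs these steps through its own Ansible modules rather than a script like this.

# Rough openstacksdk equivalent of the service-ks-register steps, using the
# swift/ceph_rgw values visible in the log; credentials and region are assumptions.
import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is a placeholder

# Service and endpoints (cf. "ceph-rgw | Creating services/endpoints")
service = conn.identity.create_service(name="swift", type="object-store")
endpoints = {
    "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
}
for interface, url in endpoints.items():
    conn.identity.create_endpoint(
        service_id=service.id, interface=interface, url=url, region_id="RegionOne"
    )

# Service user, roles and role grant (cf. "Creating users/roles", "Granting user roles")
project = conn.identity.find_project("service")
user = conn.identity.create_user(
    name="ceph_rgw", password="CHANGE_ME", default_project_id=project.id
)
conn.identity.create_role(name="ResellerAdmin")
admin = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, admin)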
2025-08-29 21:11:35.496126 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 21:11:35.496138 | orchestrator | Friday 29 August 2025 21:09:44 +0000 (0:00:00.444) 0:00:01.400 ********* 2025-08-29 21:11:35.496148 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:11:35.496159 | orchestrator | 2025-08-29 21:11:35.496170 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 21:11:35.496181 | orchestrator | Friday 29 August 2025 21:09:45 +0000 (0:00:00.413) 0:00:01.813 ********* 2025-08-29 21:11:35.496191 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 21:11:35.496202 | orchestrator | 2025-08-29 21:11:35.496213 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 21:11:35.496224 | orchestrator | Friday 29 August 2025 21:09:49 +0000 (0:00:03.938) 0:00:05.751 ********* 2025-08-29 21:11:35.496234 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 21:11:35.496245 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 21:11:35.496256 | orchestrator | 2025-08-29 21:11:35.496267 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 21:11:35.496278 | orchestrator | Friday 29 August 2025 21:09:55 +0000 (0:00:06.684) 0:00:12.436 ********* 2025-08-29 21:11:35.496289 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:11:35.496300 | orchestrator | 2025-08-29 21:11:35.496310 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-08-29 21:11:35.496321 | orchestrator | Friday 29 August 2025 21:09:59 +0000 (0:00:03.285) 0:00:15.721 ********* 2025-08-29 21:11:35.496332 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:11:35.496343 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 21:11:35.496353 | orchestrator | 2025-08-29 21:11:35.496364 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 21:11:35.496383 | orchestrator | Friday 29 August 2025 21:10:02 +0000 (0:00:03.936) 0:00:19.658 ********* 2025-08-29 21:11:35.496394 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:11:35.496405 | orchestrator | 2025-08-29 21:11:35.496416 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 21:11:35.496427 | orchestrator | Friday 29 August 2025 21:10:06 +0000 (0:00:03.368) 0:00:23.026 ********* 2025-08-29 21:11:35.496437 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 21:11:35.496448 | orchestrator | 2025-08-29 21:11:35.496459 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 21:11:35.496470 | orchestrator | Friday 29 August 2025 21:10:10 +0000 (0:00:04.137) 0:00:27.163 ********* 2025-08-29 21:11:35.496480 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.496491 | orchestrator | 2025-08-29 21:11:35.496502 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 21:11:35.496521 | orchestrator | Friday 29 August 2025 
21:10:14 +0000 (0:00:03.731) 0:00:30.894 ********* 2025-08-29 21:11:35.496532 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.496543 | orchestrator | 2025-08-29 21:11:35.496554 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-08-29 21:11:35.496564 | orchestrator | Friday 29 August 2025 21:10:18 +0000 (0:00:04.274) 0:00:35.169 ********* 2025-08-29 21:11:35.496575 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.496586 | orchestrator | 2025-08-29 21:11:35.496597 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 21:11:35.496608 | orchestrator | Friday 29 August 2025 21:10:22 +0000 (0:00:03.736) 0:00:38.906 ********* 2025-08-29 21:11:35.496627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.496643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.496655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.496688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.496709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.496726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.496738 | orchestrator | 2025-08-29 21:11:35.496749 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 21:11:35.496760 | orchestrator | Friday 29 August 2025 21:10:23 +0000 (0:00:01.475) 0:00:40.381 ********* 2025-08-29 21:11:35.496771 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.496782 | orchestrator | 2025-08-29 21:11:35.496793 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 21:11:35.496804 | orchestrator | Friday 29 August 2025 21:10:23 +0000 (0:00:00.107) 0:00:40.488 ********* 2025-08-29 21:11:35.496815 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.496826 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:11:35.496837 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:11:35.496847 | orchestrator | 2025-08-29 21:11:35.496891 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 21:11:35.496902 | orchestrator | Friday 29 August 2025 21:10:24 +0000 (0:00:00.374) 0:00:40.863 
********* 2025-08-29 21:11:35.496913 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:11:35.496924 | orchestrator | 2025-08-29 21:11:35.496935 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 21:11:35.496945 | orchestrator | Friday 29 August 2025 21:10:24 +0000 (0:00:00.761) 0:00:41.625 ********* 2025-08-29 21:11:35.496965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.496977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.496996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497055 | orchestrator | 2025-08-29 21:11:35.497066 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 21:11:35.497077 | orchestrator | Friday 29 August 2025 21:10:27 +0000 (0:00:02.373) 0:00:43.998 ********* 2025-08-29 21:11:35.497088 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:11:35.497099 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:11:35.497110 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:11:35.497120 | orchestrator | 2025-08-29 21:11:35.497131 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 21:11:35.497142 | orchestrator | Friday 29 August 2025 21:10:27 +0000 (0:00:00.401) 0:00:44.399 ********* 2025-08-29 21:11:35.497153 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:11:35.497164 | orchestrator | 2025-08-29 21:11:35.497175 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 21:11:35.497185 | orchestrator | Friday 29 August 2025 21:10:28 +0000 (0:00:00.571) 0:00:44.971 ********* 2025-08-29 21:11:35.497205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497302 | orchestrator | 2025-08-29 21:11:35.497313 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 21:11:35.497324 | orchestrator | Friday 29 August 2025 21:10:30 +0000 (0:00:02.423) 0:00:47.394 ********* 2025-08-29 21:11:35.497342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497375 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:11:35.497387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:11:35.497428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497463 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.497474 | 
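The item= payloads printed with each of these magnum tasks are single-line Python reprs of the kolla service definitions the role loops over. Reindented for readability, the magnum-api entry for testbed-node-0 has roughly the following shape; the values are copied from the output above, and only the trailing comments are added interpretation.

# magnum-api service definition from the item= output above, reformatted only.
magnum_api = {
    "container_name": "magnum_api",
    "group": "magnum-api",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711",
    "environment": {"DUMMY_ENVIRONMENT": "kolla_useless_env"},
    "volumes": [
        "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "",  # empty entries appear to come from optional mounts that are not enabled (assumption)
        "kolla_logs:/var/log/kolla/",
        "",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "timeout": "30",
    },
    "haproxy": {
        "magnum_api": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9511", "listen_port": "9511",
        },
        "magnum_api_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9511", "listen_port": "9511",
        },
    },
}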
orchestrator | 2025-08-29 21:11:35.497485 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 21:11:35.497496 | orchestrator | Friday 29 August 2025 21:10:31 +0000 (0:00:00.637) 0:00:48.031 ********* 2025-08-29 21:11:35.497508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.497551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497586 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:11:35.497598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.497609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.497621 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:11:35.497631 | orchestrator | 2025-08-29 21:11:35.497642 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 21:11:35.497653 | orchestrator | Friday 29 August 2025 21:10:32 +0000 (0:00:01.662) 0:00:49.694 ********* 2025-08-29 21:11:35.497665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497787 | orchestrator | 2025-08-29 21:11:35.497803 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 21:11:35.497815 | orchestrator | Friday 29 August 2025 21:10:35 +0000 (0:00:02.603) 0:00:52.297 ********* 2025-08-29 21:11:35.497827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.497940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.497991 | orchestrator | 2025-08-29 21:11:35.498002 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 21:11:35.498066 | orchestrator | Friday 29 August 2025 21:10:41 +0000 (0:00:05.561) 0:00:57.859 ********* 2025-08-29 21:11:35.498082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.498094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.498106 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.498117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.498138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.498157 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:11:35.498174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 21:11:35.498185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:11:35.498197 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:11:35.498208 | orchestrator | 2025-08-29 21:11:35.498219 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 21:11:35.498229 | orchestrator | Friday 29 August 2025 21:10:42 +0000 (0:00:01.378) 0:00:59.237 ********* 2025-08-29 21:11:35.498241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.498259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.498281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 21:11:35.498298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.498309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.498321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:11:35.498332 | orchestrator | 2025-08-29 21:11:35.498343 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 21:11:35.498354 | orchestrator | Friday 29 August 2025 21:10:45 +0000 (0:00:03.473) 0:01:02.711 ********* 2025-08-29 21:11:35.498366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:11:35.498377 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:11:35.498395 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
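The magnum service definitions echoed in the items above each carry a healthcheck block: magnum_api is probed with healthcheck_curl against its API port (9511 on the node's internal address), while magnum_conductor is probed with healthcheck_port against the AMQP port 5672. The snippet below is only a rough, hand-rolled Python approximation of what those probes test; the real helpers are scripts shipped inside the kolla images (healthcheck_port in particular inspects the named process's sockets, which this simplified stand-in does not).

    # Rough stand-ins for the probes named in the healthcheck entries above.
    # Not the kolla helper scripts themselves; illustration only.
    import socket
    import urllib.error
    import urllib.request

    def curl_ok(url, timeout=30.0):
        """True if the HTTP endpoint answers with any non-5xx status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:
            return exc.code < 500
        except OSError:
            return False

    def port_reachable(host, port, timeout=30.0):
        """True if a TCP connection to host:port can be opened."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # e.g. curl_ok("http://192.168.16.10:9511") for magnum_api, and
    # port_reachable(rabbitmq_host, 5672) as a simplified stand-in for the
    # conductor check (rabbitmq_host is a placeholder, not taken from this log).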
21:11:35.498406 | orchestrator | 2025-08-29 21:11:35.498417 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-08-29 21:11:35.498428 | orchestrator | Friday 29 August 2025 21:10:46 +0000 (0:00:00.321) 0:01:03.032 ********* 2025-08-29 21:11:35.498438 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.498448 | orchestrator | 2025-08-29 21:11:35.498457 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 21:11:35.498467 | orchestrator | Friday 29 August 2025 21:10:48 +0000 (0:00:02.018) 0:01:05.051 ********* 2025-08-29 21:11:35.498477 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.498486 | orchestrator | 2025-08-29 21:11:35.498496 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-08-29 21:11:35.498511 | orchestrator | Friday 29 August 2025 21:10:50 +0000 (0:00:02.165) 0:01:07.216 ********* 2025-08-29 21:11:35.498521 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.498530 | orchestrator | 2025-08-29 21:11:35.498540 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 21:11:35.498549 | orchestrator | Friday 29 August 2025 21:11:06 +0000 (0:00:16.388) 0:01:23.605 ********* 2025-08-29 21:11:35.498559 | orchestrator | 2025-08-29 21:11:35.498569 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 21:11:35.498578 | orchestrator | Friday 29 August 2025 21:11:06 +0000 (0:00:00.059) 0:01:23.665 ********* 2025-08-29 21:11:35.498588 | orchestrator | 2025-08-29 21:11:35.498597 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 21:11:35.498607 | orchestrator | Friday 29 August 2025 21:11:07 +0000 (0:00:00.059) 0:01:23.724 ********* 2025-08-29 21:11:35.498616 | orchestrator | 2025-08-29 21:11:35.498626 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 21:11:35.498635 | orchestrator | Friday 29 August 2025 21:11:07 +0000 (0:00:00.063) 0:01:23.787 ********* 2025-08-29 21:11:35.498645 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.498655 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:11:35.498664 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:11:35.498674 | orchestrator | 2025-08-29 21:11:35.498684 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 21:11:35.498693 | orchestrator | Friday 29 August 2025 21:11:22 +0000 (0:00:15.278) 0:01:39.066 ********* 2025-08-29 21:11:35.498703 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:11:35.498713 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:11:35.498722 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:11:35.498732 | orchestrator | 2025-08-29 21:11:35.498746 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:11:35.498756 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 21:11:35.498766 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 21:11:35.498776 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 21:11:35.498786 | orchestrator | 2025-08-29 21:11:35.498795 | 
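With the magnum play finished and both containers restarted by the handlers, the haproxy entries from the service definitions (magnum_api on the internal VIP, magnum_api_external behind api.testbed.osism.xyz, both on port 9511) should be serving requests. A minimal openstacksdk smoke test one could run afterwards, assuming a clouds.yaml profile named "testbed" (the profile name is an assumption, not taken from this log):

    # Hedged post-deploy smoke test for magnum-api via openstacksdk.
    import openstack

    conn = openstack.connect(cloud="testbed")  # hypothetical cloud profile name
    # Magnum is registered as the container-infra service; listing cluster
    # templates exercises magnum-api end to end through the frontends above.
    for template in conn.container_infrastructure_management.cluster_templates():
        print(template.name)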
orchestrator | 2025-08-29 21:11:35.498805 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:11:35.498815 | orchestrator | Friday 29 August 2025 21:11:33 +0000 (0:00:11.126) 0:01:50.192 ********* 2025-08-29 21:11:35.498824 | orchestrator | =============================================================================== 2025-08-29 21:11:35.498834 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.39s 2025-08-29 21:11:35.498843 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.28s 2025-08-29 21:11:35.498867 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.13s 2025-08-29 21:11:35.498884 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.68s 2025-08-29 21:11:35.498893 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.56s 2025-08-29 21:11:35.498903 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.27s 2025-08-29 21:11:35.498912 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.14s 2025-08-29 21:11:35.498922 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.94s 2025-08-29 21:11:35.498932 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.94s 2025-08-29 21:11:35.498941 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.74s 2025-08-29 21:11:35.498950 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.73s 2025-08-29 21:11:35.498960 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.47s 2025-08-29 21:11:35.498970 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.37s 2025-08-29 21:11:35.498979 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.29s 2025-08-29 21:11:35.498989 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.60s 2025-08-29 21:11:35.498998 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.42s 2025-08-29 21:11:35.499008 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.37s 2025-08-29 21:11:35.499017 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.17s 2025-08-29 21:11:35.499027 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.02s 2025-08-29 21:11:35.499036 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.66s 2025-08-29 21:11:35.499046 | orchestrator | 2025-08-29 21:11:35 | INFO  | Task 0e377e40-74f4-4960-bc39-3bc6e51ac5e3 is in state SUCCESS 2025-08-29 21:11:35.499056 | orchestrator | 2025-08-29 21:11:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:38.535434 | orchestrator | 2025-08-29 21:11:38 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:38.537010 | orchestrator | 2025-08-29 21:11:38 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:38.537993 | orchestrator | 2025-08-29 21:11:38 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:38.538958 | 
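The interleaved "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the OSISM client polling the task IDs of the remaining plays until each reports SUCCESS. A minimal sketch of such a wait loop, with check_state() as a hypothetical placeholder rather than the real client API:

    # Minimal sketch of a task-state polling loop like the one producing the
    # log lines above. check_state() is a hypothetical placeholder.
    import time

    def wait_for_tasks(task_ids, check_state, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = check_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)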
orchestrator | 2025-08-29 21:11:38 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:38.539555 | orchestrator | 2025-08-29 21:11:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:41.563549 | orchestrator | 2025-08-29 21:11:41 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:41.566413 | orchestrator | 2025-08-29 21:11:41 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:41.567506 | orchestrator | 2025-08-29 21:11:41 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:41.568430 | orchestrator | 2025-08-29 21:11:41 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:41.568605 | orchestrator | 2025-08-29 21:11:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:44.588307 | orchestrator | 2025-08-29 21:11:44 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:44.588580 | orchestrator | 2025-08-29 21:11:44 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:44.589458 | orchestrator | 2025-08-29 21:11:44 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:44.590645 | orchestrator | 2025-08-29 21:11:44 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:44.590706 | orchestrator | 2025-08-29 21:11:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:47.620444 | orchestrator | 2025-08-29 21:11:47 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:47.622219 | orchestrator | 2025-08-29 21:11:47 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:47.623986 | orchestrator | 2025-08-29 21:11:47 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:47.626411 | orchestrator | 2025-08-29 21:11:47 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:47.626490 | orchestrator | 2025-08-29 21:11:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:50.656663 | orchestrator | 2025-08-29 21:11:50 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:50.658306 | orchestrator | 2025-08-29 21:11:50 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:50.658821 | orchestrator | 2025-08-29 21:11:50 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:50.659721 | orchestrator | 2025-08-29 21:11:50 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:50.659750 | orchestrator | 2025-08-29 21:11:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:53.688761 | orchestrator | 2025-08-29 21:11:53 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:53.691421 | orchestrator | 2025-08-29 21:11:53 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:53.692826 | orchestrator | 2025-08-29 21:11:53 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:53.695209 | orchestrator | 2025-08-29 21:11:53 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:53.695242 | orchestrator | 2025-08-29 21:11:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:56.730459 | orchestrator | 2025-08-29 
21:11:56 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:56.731396 | orchestrator | 2025-08-29 21:11:56 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:56.732696 | orchestrator | 2025-08-29 21:11:56 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:56.735013 | orchestrator | 2025-08-29 21:11:56 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:56.735045 | orchestrator | 2025-08-29 21:11:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:11:59.796239 | orchestrator | 2025-08-29 21:11:59 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:11:59.796317 | orchestrator | 2025-08-29 21:11:59 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:11:59.796331 | orchestrator | 2025-08-29 21:11:59 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:11:59.799924 | orchestrator | 2025-08-29 21:11:59 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:11:59.799953 | orchestrator | 2025-08-29 21:11:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:02.837553 | orchestrator | 2025-08-29 21:12:02 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:02.839163 | orchestrator | 2025-08-29 21:12:02 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:02.840712 | orchestrator | 2025-08-29 21:12:02 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:12:02.842515 | orchestrator | 2025-08-29 21:12:02 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:02.842544 | orchestrator | 2025-08-29 21:12:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:05.882409 | orchestrator | 2025-08-29 21:12:05 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:05.882492 | orchestrator | 2025-08-29 21:12:05 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:05.883327 | orchestrator | 2025-08-29 21:12:05 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:12:05.884263 | orchestrator | 2025-08-29 21:12:05 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:05.884286 | orchestrator | 2025-08-29 21:12:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:08.908311 | orchestrator | 2025-08-29 21:12:08 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:08.909168 | orchestrator | 2025-08-29 21:12:08 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:08.909748 | orchestrator | 2025-08-29 21:12:08 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:12:08.910459 | orchestrator | 2025-08-29 21:12:08 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:08.910609 | orchestrator | 2025-08-29 21:12:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:11.939902 | orchestrator | 2025-08-29 21:12:11 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:11.941520 | orchestrator | 2025-08-29 21:12:11 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:11.941916 | orchestrator | 2025-08-29 
21:12:11 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:12:11.943490 | orchestrator | 2025-08-29 21:12:11 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:11.943572 | orchestrator | 2025-08-29 21:12:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:14.969765 | orchestrator | 2025-08-29 21:12:14 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:14.970143 | orchestrator | 2025-08-29 21:12:14 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:14.970870 | orchestrator | 2025-08-29 21:12:14 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state STARTED 2025-08-29 21:12:14.971906 | orchestrator | 2025-08-29 21:12:14 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:14.971931 | orchestrator | 2025-08-29 21:12:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:17.993247 | orchestrator | 2025-08-29 21:12:17 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:17.993623 | orchestrator | 2025-08-29 21:12:17 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:17.995402 | orchestrator | 2025-08-29 21:12:17.995429 | orchestrator | 2025-08-29 21:12:17 | INFO  | Task bc9bb6b2-9694-4e4e-88c2-f39e3a01b58a is in state SUCCESS 2025-08-29 21:12:17.996762 | orchestrator | 2025-08-29 21:12:17.996864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:12:17.996882 | orchestrator | 2025-08-29 21:12:17.996895 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:12:17.996932 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.259) 0:00:00.259 ********* 2025-08-29 21:12:17.997045 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:12:17.997621 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:12:17.997636 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:12:17.997647 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:12:17.997658 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:12:17.997669 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:12:17.997680 | orchestrator | 2025-08-29 21:12:17.997691 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:12:17.997702 | orchestrator | Friday 29 August 2025 21:07:40 +0000 (0:00:00.683) 0:00:00.942 ********* 2025-08-29 21:12:17.997713 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-08-29 21:12:17.997725 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-08-29 21:12:17.997736 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-08-29 21:12:17.997747 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-08-29 21:12:17.997757 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-08-29 21:12:17.997768 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-08-29 21:12:17.997779 | orchestrator | 2025-08-29 21:12:17.997790 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-08-29 21:12:17.997801 | orchestrator | 2025-08-29 21:12:17.997812 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 21:12:17.997823 | orchestrator | Friday 29 August 2025 21:07:41 
+0000 (0:00:00.501) 0:00:01.444 ********* 2025-08-29 21:12:17.997871 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:12:17.997884 | orchestrator | 2025-08-29 21:12:17.997894 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-08-29 21:12:17.997905 | orchestrator | Friday 29 August 2025 21:07:42 +0000 (0:00:00.846) 0:00:02.291 ********* 2025-08-29 21:12:17.997916 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:12:17.997927 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:12:17.997938 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:12:17.997949 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:12:17.997959 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:12:17.997970 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:12:17.997981 | orchestrator | 2025-08-29 21:12:17.997992 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-08-29 21:12:17.998060 | orchestrator | Friday 29 August 2025 21:07:43 +0000 (0:00:01.091) 0:00:03.383 ********* 2025-08-29 21:12:17.998076 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:12:17.998087 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:12:17.998098 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:12:17.998109 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:12:17.998120 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:12:17.998130 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:12:17.998141 | orchestrator | 2025-08-29 21:12:17.998152 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-08-29 21:12:17.998163 | orchestrator | Friday 29 August 2025 21:07:44 +0000 (0:00:01.011) 0:00:04.394 ********* 2025-08-29 21:12:17.998174 | orchestrator | ok: [testbed-node-0] => { 2025-08-29 21:12:17.998185 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998196 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998207 | orchestrator | } 2025-08-29 21:12:17.998220 | orchestrator | ok: [testbed-node-1] => { 2025-08-29 21:12:17.998232 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998243 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998255 | orchestrator | } 2025-08-29 21:12:17.998267 | orchestrator | ok: [testbed-node-2] => { 2025-08-29 21:12:17.998281 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998293 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998305 | orchestrator | } 2025-08-29 21:12:17.998317 | orchestrator | ok: [testbed-node-3] => { 2025-08-29 21:12:17.998340 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998352 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998364 | orchestrator | } 2025-08-29 21:12:17.998376 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 21:12:17.998388 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998400 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998412 | orchestrator | } 2025-08-29 21:12:17.998425 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 21:12:17.998437 | orchestrator |  "changed": false, 2025-08-29 21:12:17.998449 | orchestrator |  "msg": "All assertions passed" 2025-08-29 21:12:17.998461 | orchestrator | } 2025-08-29 21:12:17.998473 | orchestrator | 2025-08-29 21:12:17.998484 | orchestrator | TASK [neutron : Check for ML2/OVS presence] 
************************************ 2025-08-29 21:12:17.998495 | orchestrator | Friday 29 August 2025 21:07:45 +0000 (0:00:00.653) 0:00:05.048 ********* 2025-08-29 21:12:17.998506 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:17.998517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:17.998527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:17.998541 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:17.998553 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:17.998564 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:17.998575 | orchestrator | 2025-08-29 21:12:17.998586 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-08-29 21:12:17.998597 | orchestrator | Friday 29 August 2025 21:07:45 +0000 (0:00:00.510) 0:00:05.559 ********* 2025-08-29 21:12:17.998608 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-08-29 21:12:17.998619 | orchestrator | 2025-08-29 21:12:17.998630 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-08-29 21:12:17.998641 | orchestrator | Friday 29 August 2025 21:07:48 +0000 (0:00:03.317) 0:00:08.877 ********* 2025-08-29 21:12:17.998652 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-08-29 21:12:17.998664 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-08-29 21:12:17.998675 | orchestrator | 2025-08-29 21:12:17.998737 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-08-29 21:12:17.998751 | orchestrator | Friday 29 August 2025 21:07:55 +0000 (0:00:06.269) 0:00:15.146 ********* 2025-08-29 21:12:17.998762 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:12:17.998773 | orchestrator | 2025-08-29 21:12:17.998784 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-08-29 21:12:17.998795 | orchestrator | Friday 29 August 2025 21:07:58 +0000 (0:00:03.112) 0:00:18.259 ********* 2025-08-29 21:12:17.998805 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:12:17.998816 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-08-29 21:12:17.998843 | orchestrator | 2025-08-29 21:12:17.998855 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-08-29 21:12:17.998866 | orchestrator | Friday 29 August 2025 21:08:02 +0000 (0:00:03.876) 0:00:22.135 ********* 2025-08-29 21:12:17.998877 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:12:17.998888 | orchestrator | 2025-08-29 21:12:17.998898 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-08-29 21:12:17.998909 | orchestrator | Friday 29 August 2025 21:08:05 +0000 (0:00:03.219) 0:00:25.355 ********* 2025-08-29 21:12:17.998920 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-08-29 21:12:17.998931 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-08-29 21:12:17.998941 | orchestrator | 2025-08-29 21:12:17.998952 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 21:12:17.998963 | orchestrator | Friday 29 August 2025 21:08:13 +0000 (0:00:08.110) 0:00:33.465 ********* 2025-08-29 
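The service-ks-register tasks above register neutron in Keystone: a "network" service, internal and public endpoints at https://api-int.testbed.osism.xyz:9696 and https://api.testbed.osism.xyz:9696, the neutron service user, and the admin/service role grants. A hedged openstacksdk sketch for verifying that registration after the run (again assuming a hypothetical "testbed" cloud profile):

    # Hedged verification of the Keystone registration done by service-ks-register.
    import openstack

    conn = openstack.connect(cloud="testbed")  # hypothetical cloud profile name
    for service in conn.identity.services():
        if service.type != "network":
            continue
        # Filter endpoints client-side to avoid relying on server-side query support.
        for endpoint in conn.identity.endpoints():
            if endpoint.service_id == service.id:
                print(service.name, endpoint.interface, endpoint.url)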
21:12:17.998974 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:17.998992 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:17.999003 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:17.999014 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:17.999024 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:17.999035 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:17.999046 | orchestrator | 2025-08-29 21:12:17.999056 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-08-29 21:12:17.999067 | orchestrator | Friday 29 August 2025 21:08:14 +0000 (0:00:00.700) 0:00:34.166 ********* 2025-08-29 21:12:17.999078 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:17.999089 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:17.999100 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:17.999110 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:17.999121 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:17.999131 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:17.999142 | orchestrator | 2025-08-29 21:12:17.999153 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-08-29 21:12:17.999169 | orchestrator | Friday 29 August 2025 21:08:16 +0000 (0:00:02.126) 0:00:36.293 ********* 2025-08-29 21:12:17.999180 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:12:17.999191 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:12:17.999202 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:12:17.999233 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:12:17.999245 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:12:17.999256 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:12:17.999266 | orchestrator | 2025-08-29 21:12:17.999277 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-08-29 21:12:17.999288 | orchestrator | Friday 29 August 2025 21:08:17 +0000 (0:00:01.198) 0:00:37.491 ********* 2025-08-29 21:12:17.999299 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:17.999310 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:17.999321 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:17.999332 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:17.999342 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:17.999353 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:17.999364 | orchestrator | 2025-08-29 21:12:17.999374 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-08-29 21:12:17.999385 | orchestrator | Friday 29 August 2025 21:08:20 +0000 (0:00:02.846) 0:00:40.338 ********* 2025-08-29 21:12:17.999399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999534 | orchestrator | 2025-08-29 21:12:17.999545 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 21:12:17.999557 | orchestrator | Friday 29 August 2025 21:08:23 +0000 (0:00:03.126) 0:00:43.465 ********* 2025-08-29 21:12:17.999568 | orchestrator | [WARNING]: Skipped 2025-08-29 21:12:17.999580 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 21:12:17.999591 | orchestrator | due to this access issue: 2025-08-29 21:12:17.999602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 21:12:17.999619 | orchestrator | a directory 2025-08-29 21:12:17.999630 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:12:17.999641 | orchestrator | 2025-08-29 21:12:17.999683 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 21:12:17.999696 | orchestrator | Friday 29 August 2025 21:08:24 +0000 (0:00:00.905) 0:00:44.370 ********* 2025-08-29 21:12:17.999707 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:12:17.999719 | orchestrator | 2025-08-29 21:12:17.999731 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 21:12:17.999742 | orchestrator | Friday 29 August 2025 21:08:25 +0000 (0:00:01.127) 0:00:45.497 ********* 2025-08-29 21:12:17.999753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:17.999893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:17.999991 | orchestrator | 2025-08-29 21:12:18.000002 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 21:12:18.000015 | orchestrator | Friday 29 August 2025 21:08:29 +0000 (0:00:04.336) 0:00:49.834 ********* 2025-08-29 21:12:18.000031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000044 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.000056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000068 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.000114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000128 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000163 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.000179 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.000191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000202 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.000213 | orchestrator | 2025-08-29 21:12:18.000225 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 21:12:18.000236 | orchestrator | Friday 29 August 2025 21:08:32 +0000 (0:00:03.068) 0:00:52.902 ********* 2025-08-29 21:12:18.000247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000265 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.000308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000322 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.000333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000345 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-08-29 21:12:18.000372 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.000383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000402 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.000413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000425 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.000435 | orchestrator | 2025-08-29 21:12:18.000446 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 21:12:18.000463 | orchestrator | Friday 29 August 2025 21:08:35 +0000 (0:00:03.068) 0:00:55.971 ********* 2025-08-29 21:12:18.000475 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000486 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.000497 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.000508 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.000519 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.000530 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.000541 | orchestrator | 2025-08-29 21:12:18.000552 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-08-29 21:12:18.000563 | orchestrator | Friday 29 August 2025 21:08:39 +0000 (0:00:03.043) 0:00:59.017 ********* 2025-08-29 21:12:18.000574 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000585 | orchestrator | 2025-08-29 21:12:18.000596 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-08-29 21:12:18.000607 | orchestrator | Friday 29 August 2025 21:08:39 +0000 (0:00:00.201) 0:00:59.218 ********* 2025-08-29 21:12:18.000618 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000629 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.000640 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.000651 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.000661 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
21:12:18.000672 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.000683 | orchestrator | 2025-08-29 21:12:18.000694 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-08-29 21:12:18.000705 | orchestrator | Friday 29 August 2025 21:08:40 +0000 (0:00:00.809) 0:01:00.028 ********* 2025-08-29 21:12:18.000721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000739 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.000751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.000797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.000808 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.000819 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.000891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000904 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.000921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.000939 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.000950 | orchestrator | 2025-08-29 21:12:18.000961 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 21:12:18.000972 | orchestrator | Friday 29 August 2025 21:08:43 +0000 (0:00:03.099) 0:01:03.127 ********* 2025-08-29 21:12:18.000983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001073 | orchestrator | 2025-08-29 21:12:18.001084 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 21:12:18.001095 | orchestrator | Friday 29 August 2025 21:08:47 +0000 (0:00:04.055) 0:01:07.183 ********* 2025-08-29 21:12:18.001113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.001204 | orchestrator | 2025-08-29 21:12:18.001215 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 21:12:18.001234 | orchestrator | Friday 29 August 2025 21:08:54 +0000 (0:00:06.923) 0:01:14.106 ********* 2025-08-29 21:12:18.001254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001281 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001320 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001340 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001396 | orchestrator | 2025-08-29 21:12:18.001406 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 21:12:18.001416 | orchestrator | Friday 29 August 2025 21:08:57 +0000 (0:00:03.813) 0:01:17.920 ********* 2025-08-29 21:12:18.001426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001436 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001445 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001455 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:12:18.001464 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:12:18.001474 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:12:18.001484 | orchestrator | 2025-08-29 21:12:18.001497 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 21:12:18.001507 | orchestrator | Friday 29 August 2025 21:09:01 +0000 (0:00:03.252) 0:01:21.173 ********* 2025-08-29 21:12:18.001517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001527 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001548 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.001578 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.001623 | orchestrator | 2025-08-29 21:12:18.001633 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 21:12:18.001643 | orchestrator | Friday 29 August 2025 21:09:04 +0000 (0:00:03.712) 0:01:24.885 ********* 2025-08-29 21:12:18.001653 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.001663 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.001672 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.001682 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001691 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001701 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001710 | orchestrator | 2025-08-29 21:12:18.001720 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 21:12:18.001730 | orchestrator | Friday 29 August 2025 21:09:07 +0000 (0:00:02.217) 0:01:27.103 ********* 2025-08-29 21:12:18.001739 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.001749 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.001758 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001774 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.001784 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001793 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001803 | orchestrator | 2025-08-29 21:12:18.001813 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 21:12:18.001822 | orchestrator | Friday 29 August 2025 21:09:09 +0000 (0:00:02.020) 0:01:29.123 ********* 2025-08-29 21:12:18.001856 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.001874 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.001891 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.001914 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.001924 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.001934 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.001944 | orchestrator | 2025-08-29 21:12:18.001953 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 21:12:18.001963 | orchestrator | Friday 29 August 2025 21:09:11 +0000 (0:00:02.077) 0:01:31.201 ********* 2025-08-29 21:12:18.001973 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.001983 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.001992 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002002 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
21:12:18.002011 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002067 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002077 | orchestrator | 2025-08-29 21:12:18.002086 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 21:12:18.002096 | orchestrator | Friday 29 August 2025 21:09:13 +0000 (0:00:02.025) 0:01:33.226 ********* 2025-08-29 21:12:18.002106 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002116 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002135 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002144 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002154 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002163 | orchestrator | 2025-08-29 21:12:18.002173 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 21:12:18.002183 | orchestrator | Friday 29 August 2025 21:09:14 +0000 (0:00:01.667) 0:01:34.894 ********* 2025-08-29 21:12:18.002192 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002202 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002212 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002222 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002241 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002250 | orchestrator | 2025-08-29 21:12:18.002260 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 21:12:18.002270 | orchestrator | Friday 29 August 2025 21:09:17 +0000 (0:00:02.198) 0:01:37.092 ********* 2025-08-29 21:12:18.002279 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002289 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002299 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002309 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002318 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002328 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002343 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002353 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002363 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002372 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002382 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 21:12:18.002399 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002409 | orchestrator | 2025-08-29 21:12:18.002419 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 21:12:18.002429 | orchestrator | Friday 29 August 2025 21:09:18 +0000 (0:00:01.720) 0:01:38.812 ********* 2025-08-29 21:12:18.002439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002449 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002476 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002497 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002526 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002547 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002567 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002576 | orchestrator | 2025-08-29 21:12:18.002586 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 21:12:18.002596 | orchestrator | Friday 29 August 2025 21:09:20 +0000 (0:00:01.794) 0:01:40.607 ********* 2025-08-29 21:12:18.002611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002622 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002642 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002681 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002702 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.002722 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.002748 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002758 | orchestrator | 2025-08-29 21:12:18.002767 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 21:12:18.002777 | orchestrator | Friday 29 August 2025 21:09:22 +0000 (0:00:02.278) 0:01:42.886 ********* 2025-08-29 21:12:18.002787 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002797 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002807 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002816 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.002853 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.002864 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.002873 | orchestrator | 2025-08-29 21:12:18.002883 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 21:12:18.002892 | orchestrator | Friday 29 August 2025 21:09:24 +0000 (0:00:01.760) 0:01:44.646 ********* 2025-08-29 21:12:18.002902 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002912 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.002921 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.002930 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:12:18.002940 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:12:18.002949 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:12:18.002959 | orchestrator | 2025-08-29 21:12:18.002968 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-08-29 21:12:18.002978 | orchestrator | Friday 29 August 2025 21:09:28 +0000 (0:00:03.353) 0:01:47.999 ********* 2025-08-29 21:12:18.002987 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.002997 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.003007 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.003020 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.003030 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.003039 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.003049 | orchestrator | 2025-08-29 21:12:18.003059 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 21:12:18.003069 | orchestrator | Friday 29 August 2025 21:09:29 +0000 (0:00:01.856) 0:01:49.856 ********* 2025-08-29 21:12:18.003079 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.003088 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.003098 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.003107 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.003117 | orchestrator | skipping: [testbed-node-3] 
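[Editor's note, not part of the console output] The per-service dicts echoed in the task items above (container_name, image, volumes, healthcheck, haproxy) are the data kolla-ansible loops over for each neutron configuration task; whether a host reports "changed" or "skipping" depends on which services are mapped to it (neutron-server on testbed-node-0/1/2, neutron-ovn-metadata-agent on testbed-node-3/4/5). The sketch below is a minimal, illustrative approximation in Python of how such a service map can drive the copy/skip decisions seen in this log. It is not kolla-ansible's actual implementation: the service entries are copied from the log output, while CONFIG_FILES and the plan() helper are hypothetical simplifications for illustration.

#!/usr/bin/env python3
"""Illustrative sketch (not kolla-ansible code): loop over per-host neutron
services, as printed in the task output above, and show which config files
would be copied versus skipped. Service data mirrors the log; the
file-to-service mapping below is an assumption made for this sketch."""

# Per-host service map, mirroring the dicts shown in the log output.
SERVICES = {
    "testbed-node-0": {
        "neutron-server": {
            "container_name": "neutron_server",
            "image": "registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711",
            "enabled": True,
            "healthcheck": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
        },
    },
    "testbed-node-3": {
        "neutron-ovn-metadata-agent": {
            "container_name": "neutron_ovn_metadata_agent",
            "image": "registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711",
            "enabled": True,
            "healthcheck": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
        },
    },
}

# Simplified mapping of config files to the services that consume them
# (an assumption for this sketch, inferred from which hosts changed vs. skipped).
CONFIG_FILES = {
    "neutron.conf": {"neutron-server", "neutron-ovn-metadata-agent"},
    "ml2_conf.ini": {"neutron-server"},
    "neutron_ovn_metadata_agent.ini": {"neutron-ovn-metadata-agent"},
    "metering_agent.ini": set(),  # agent not deployed in this testbed, so every host skips it
}


def plan(host: str) -> None:
    """Print a changed/skipping line per config file for one host, roughly like the log above."""
    services = SERVICES.get(host, {})
    for filename, consumers in CONFIG_FILES.items():
        needed = [name for name in services if name in consumers and services[name]["enabled"]]
        state = "changed" if needed else "skipping"
        print(f"{filename:35s} {host}: {state} ({', '.join(needed) or 'no matching service'})")


if __name__ == "__main__":
    for host in SERVICES:
        plan(host)
        print()

Running the sketch prints "changed" only for the file/service pairs that exist on a host and "skipping" otherwise, which is the same pattern the real playbook produces in the entries above and below (for example, every host skips metering_agent.ini, while only the OVN metadata-agent nodes changed neutron_ovn_metadata_agent.ini).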
2025-08-29 21:12:18.003126 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:12:18.003136 | orchestrator |
2025-08-29 21:12:18.003146 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-08-29 21:12:18.003156 | orchestrator | Friday 29 August 2025 21:09:32 +0000 (0:00:02.981) 0:01:52.837 *********
2025-08-29 21:12:18.003165 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:12:18.003175 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:12:18.003184 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:12:18.003194 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:12:18.003203 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:12:18.003213 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:12:18.003223 | orchestrator |
2025-08-29 21:12:18.003233 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-08-29 21:12:18.003242 | orchestrator | Friday 29 August 2025 21:09:35 +0000 (0:00:03.034) 0:01:55.872 *********
2025-08-29 21:12:18.003252 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:12:18.003262 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:12:18.003271 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:12:18.003281 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:12:18.003290 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:12:18.003300 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:12:18.003309 | orchestrator |
2025-08-29 21:12:18.003319 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-08-29 21:12:18.003328 | orchestrator | Friday 29 August 2025 21:09:37 +0000 (0:00:01.860) 0:01:57.732 *********
2025-08-29 21:12:18.003338 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:12:18.003348 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:12:18.003357 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:12:18.003367 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:12:18.003376 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:12:18.003386 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:12:18.003401 | orchestrator |
2025-08-29 21:12:18.003411 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-08-29 21:12:18.003420 | orchestrator | Friday 29 August 2025 21:09:40 +0000 (0:00:02.374) 0:02:00.106 *********
2025-08-29 21:12:18.003430 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:12:18.003440 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:12:18.003449 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:12:18.003458 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:12:18.003468 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:12:18.003477 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:12:18.003487 | orchestrator |
2025-08-29 21:12:18.003496 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-08-29 21:12:18.003506 | orchestrator | Friday 29 August 2025 21:09:42 +0000 (0:00:02.331) 0:02:02.438 *********
2025-08-29 21:12:18.003516 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:12:18.003531 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:12:18.003541 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:12:18.003551 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:12:18.003560 | orchestrator | skipping:
[testbed-node-3] 2025-08-29 21:12:18.003570 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.003580 | orchestrator | 2025-08-29 21:12:18.003590 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 21:12:18.003600 | orchestrator | Friday 29 August 2025 21:09:44 +0000 (0:00:02.240) 0:02:04.678 ********* 2025-08-29 21:12:18.003609 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.003619 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.003629 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.003639 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.003648 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.003658 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.003668 | orchestrator | 2025-08-29 21:12:18.003677 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-08-29 21:12:18.003687 | orchestrator | Friday 29 August 2025 21:09:47 +0000 (0:00:02.320) 0:02:06.998 ********* 2025-08-29 21:12:18.003697 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003707 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.003717 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003727 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.003737 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003747 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.003757 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003766 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.003776 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003786 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.003796 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 21:12:18.003806 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.003815 | orchestrator | 2025-08-29 21:12:18.003825 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 21:12:18.003885 | orchestrator | Friday 29 August 2025 21:09:49 +0000 (0:00:02.701) 0:02:09.700 ********* 2025-08-29 21:12:18.003900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-08-29 21:12:18.003918 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.003929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.003937 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.003952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.003961 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.003969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 21:12:18.003977 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.003988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.004002 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.004010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 21:12:18.004018 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.004026 | orchestrator | 2025-08-29 21:12:18.004034 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 21:12:18.004042 | orchestrator | Friday 29 August 2025 21:09:51 +0000 (0:00:01.773) 0:02:11.473 ********* 2025-08-29 21:12:18.004050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.004064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.004073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 21:12:18.004089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.004098 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.004106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 21:12:18.004114 | orchestrator | 2025-08-29 21:12:18.004122 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 21:12:18.004134 | orchestrator | Friday 29 August 2025 21:09:53 +0000 (0:00:02.234) 0:02:13.708 ********* 2025-08-29 21:12:18.004143 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:12:18.004151 | 
orchestrator | skipping: [testbed-node-1] 2025-08-29 21:12:18.004159 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:12:18.004167 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:12:18.004175 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:12:18.004182 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:12:18.004190 | orchestrator | 2025-08-29 21:12:18.004198 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-08-29 21:12:18.004206 | orchestrator | Friday 29 August 2025 21:09:54 +0000 (0:00:00.545) 0:02:14.253 ********* 2025-08-29 21:12:18.004214 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:12:18.004222 | orchestrator | 2025-08-29 21:12:18.004230 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 21:12:18.004238 | orchestrator | Friday 29 August 2025 21:09:56 +0000 (0:00:02.125) 0:02:16.379 ********* 2025-08-29 21:12:18.004246 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:12:18.004253 | orchestrator | 2025-08-29 21:12:18.004261 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 21:12:18.004269 | orchestrator | Friday 29 August 2025 21:09:58 +0000 (0:00:02.200) 0:02:18.580 ********* 2025-08-29 21:12:18.004277 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:12:18.004285 | orchestrator | 2025-08-29 21:12:18.004293 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004306 | orchestrator | Friday 29 August 2025 21:10:39 +0000 (0:00:40.882) 0:02:59.462 ********* 2025-08-29 21:12:18.004313 | orchestrator | 2025-08-29 21:12:18.004321 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004329 | orchestrator | Friday 29 August 2025 21:10:39 +0000 (0:00:00.158) 0:02:59.621 ********* 2025-08-29 21:12:18.004337 | orchestrator | 2025-08-29 21:12:18.004345 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004353 | orchestrator | Friday 29 August 2025 21:10:39 +0000 (0:00:00.109) 0:02:59.730 ********* 2025-08-29 21:12:18.004361 | orchestrator | 2025-08-29 21:12:18.004368 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004376 | orchestrator | Friday 29 August 2025 21:10:39 +0000 (0:00:00.074) 0:02:59.805 ********* 2025-08-29 21:12:18.004384 | orchestrator | 2025-08-29 21:12:18.004392 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004400 | orchestrator | Friday 29 August 2025 21:10:40 +0000 (0:00:00.213) 0:03:00.018 ********* 2025-08-29 21:12:18.004408 | orchestrator | 2025-08-29 21:12:18.004416 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 21:12:18.004428 | orchestrator | Friday 29 August 2025 21:10:40 +0000 (0:00:00.050) 0:03:00.069 ********* 2025-08-29 21:12:18.004436 | orchestrator | 2025-08-29 21:12:18.004444 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 21:12:18.004452 | orchestrator | Friday 29 August 2025 21:10:40 +0000 (0:00:00.050) 0:03:00.119 ********* 2025-08-29 21:12:18.004474 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:12:18.004483 | orchestrator | changed: [testbed-node-1] 2025-08-29 
21:12:18.004490 | orchestrator | changed: [testbed-node-2]
2025-08-29 21:12:18.004498 | orchestrator |
2025-08-29 21:12:18.004506 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-08-29 21:12:18.004514 | orchestrator | Friday 29 August 2025 21:11:14 +0000 (0:00:33.954) 0:03:34.074 *********
2025-08-29 21:12:18.004522 | orchestrator | changed: [testbed-node-4]
2025-08-29 21:12:18.004530 | orchestrator | changed: [testbed-node-3]
2025-08-29 21:12:18.004538 | orchestrator | changed: [testbed-node-5]
2025-08-29 21:12:18.004546 | orchestrator |
2025-08-29 21:12:18.004554 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 21:12:18.004562 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 21:12:18.004570 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 21:12:18.004579 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-08-29 21:12:18.004587 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-08-29 21:12:18.004594 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-08-29 21:12:18.004602 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-08-29 21:12:18.004610 | orchestrator |
2025-08-29 21:12:18.004618 | orchestrator |
2025-08-29 21:12:18.004626 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 21:12:18.004634 | orchestrator | Friday 29 August 2025 21:12:15 +0000 (0:01:01.903) 0:04:35.978 *********
2025-08-29 21:12:18.004642 | orchestrator | ===============================================================================
2025-08-29 21:12:18.004650 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.90s
2025-08-29 21:12:18.004663 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.88s
2025-08-29 21:12:18.004671 | orchestrator | neutron : Restart neutron-server container ----------------------------- 33.95s
2025-08-29 21:12:18.004678 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.11s
2025-08-29 21:12:18.004690 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.92s
2025-08-29 21:12:18.004698 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.27s
2025-08-29 21:12:18.004707 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.34s
2025-08-29 21:12:18.004714 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.06s
2025-08-29 21:12:18.004722 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.88s
2025-08-29 21:12:18.004730 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.81s
2025-08-29 21:12:18.004738 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.71s
2025-08-29 21:12:18.004746 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.35s
2025-08-29 21:12:18.004754 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.32s
2025-08-29 21:12:18.004762 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.25s
2025-08-29 21:12:18.004770 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.22s
2025-08-29 21:12:18.004778 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.13s
2025-08-29 21:12:18.004786 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.11s
2025-08-29 21:12:18.004793 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.10s
2025-08-29 21:12:18.004801 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.07s
2025-08-29 21:12:18.004809 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.07s
2025-08-29 21:12:18.004817 | orchestrator | 2025-08-29 21:12:17 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:12:18.004842 | orchestrator | 2025-08-29 21:12:17 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED
2025-08-29 21:12:18.004852 | orchestrator | 2025-08-29 21:12:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:12:21.024887 | orchestrator | 2025-08-29 21:12:21 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED
2025-08-29 21:12:21.026242 | orchestrator | 2025-08-29 21:12:21 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED
2025-08-29 21:12:21.028696 | orchestrator | 2025-08-29 21:12:21 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:12:21.032369 | orchestrator | 2025-08-29 21:12:21 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED
2025-08-29 21:12:21.032402 | orchestrator | 2025-08-29 21:12:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:12:24.050155 | orchestrator | 2025-08-29 21:12:24 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED
2025-08-29 21:12:24.051040 | orchestrator | 2025-08-29 21:12:24 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED
2025-08-29 21:12:24.051641 | orchestrator | 2025-08-29 21:12:24 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:12:24.052969 | orchestrator | 2025-08-29 21:12:24 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED
2025-08-29 21:12:24.052997 | orchestrator | 2025-08-29 21:12:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:12:27.080079 | orchestrator | 2025-08-29 21:12:27 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED
2025-08-29 21:12:27.080191 | orchestrator | 2025-08-29 21:12:27 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED
2025-08-29 21:12:27.080616 | orchestrator | 2025-08-29 21:12:27 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:12:27.081261 | orchestrator | 2025-08-29 21:12:27 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED
2025-08-29 21:12:27.081283 | orchestrator | 2025-08-29 21:12:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:12:30.110687 | orchestrator | 2025-08-29 21:12:30 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED
2025-08-29 21:12:30.111041 | orchestrator | 2025-08-29 21:12:30 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED
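The healthcheck blocks in the neutron service definitions above are the commands run periodically inside each container: 'healthcheck_curl http://192.168.16.1x:9696' for neutron-server and 'healthcheck_port neutron-ovn-metadata-agent 6640' for the OVN metadata agent. Roughly, the first passes when the API endpoint answers an HTTP request, the second when the named process has a socket on the given port. A simplified, non-authoritative Python sketch of what the two checks amount to follows; the real helpers are scripts shipped in the kolla images, the function bodies here are assumptions, and healthcheck_port is reduced to a plain TCP reachability test.

    # Simplified sketch of the two healthcheck commands seen in the service
    # definitions above. Names mirror the log; the implementations are
    # assumptions, not the scripts actually shipped in the kolla images.
    import socket
    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
        """Return True if the endpoint answers an HTTP request (status < 500)."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as exc:
            return exc.code < 500   # 401/404 still proves the API is answering
        except (urllib.error.URLError, OSError):
            return False

    def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
        """Simplification: only checks that host:port accepts TCP connections."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Mirrors the checks in the definitions above:
    # healthcheck_curl("http://192.168.16.10:9696")  -> neutron-server API
    # healthcheck_port("127.0.0.1", 6640)            -> OVN southbound connection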
2025-08-29 21:12:30.111506 | orchestrator | 2025-08-29 21:12:30 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:30.112131 | orchestrator | 2025-08-29 21:12:30 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:30.112226 | orchestrator | 2025-08-29 21:12:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:33.135382 | orchestrator | 2025-08-29 21:12:33 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:33.136333 | orchestrator | 2025-08-29 21:12:33 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:33.136862 | orchestrator | 2025-08-29 21:12:33 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:33.137465 | orchestrator | 2025-08-29 21:12:33 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:33.137488 | orchestrator | 2025-08-29 21:12:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:36.170253 | orchestrator | 2025-08-29 21:12:36 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:36.170576 | orchestrator | 2025-08-29 21:12:36 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:36.171350 | orchestrator | 2025-08-29 21:12:36 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:36.173516 | orchestrator | 2025-08-29 21:12:36 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:36.173540 | orchestrator | 2025-08-29 21:12:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:39.199230 | orchestrator | 2025-08-29 21:12:39 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:39.199723 | orchestrator | 2025-08-29 21:12:39 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:39.200171 | orchestrator | 2025-08-29 21:12:39 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:39.200889 | orchestrator | 2025-08-29 21:12:39 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:39.200920 | orchestrator | 2025-08-29 21:12:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:42.227891 | orchestrator | 2025-08-29 21:12:42 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:42.229063 | orchestrator | 2025-08-29 21:12:42 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:42.231270 | orchestrator | 2025-08-29 21:12:42 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:42.232504 | orchestrator | 2025-08-29 21:12:42 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:12:42.232681 | orchestrator | 2025-08-29 21:12:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:12:45.323203 | orchestrator | 2025-08-29 21:12:45 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:12:45.323523 | orchestrator | 2025-08-29 21:12:45 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:12:45.324268 | orchestrator | 2025-08-29 21:12:45 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:12:45.324971 | orchestrator | 2025-08-29 21:12:45 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 
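The repeating 'Task … is in state STARTED' / 'Wait 1 second(s) until the next check' lines come from the deploy wrapper polling the state of the task IDs it launched until each one reports SUCCESS. A minimal sketch of that polling pattern is shown below; get_task_state() is a hypothetical stand-in for the real task-state lookup, and only the control flow is taken from the log.

    # Minimal sketch of the polling pattern visible in the log: report the state
    # of each pending task, then sleep and check again until none remain STARTED.
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s  | %(message)s",
                        level=logging.INFO)

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """get_task_state(task_id) -> state string, e.g. STARTED or SUCCESS."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):          # sorted() copies, safe to discard
                state = get_task_state(task_id)
                logging.info("Task %s is in state %s", task_id, state)
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", int(interval))
                time.sleep(interval)   # in the log the cycles land ~3 s apart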
[... status polling repeated every ~3 seconds from 21:12:45 to 21:13:18: tasks ec1ab385-8dbf-4c67-ba01-62b04adc2ac1, ce44fe80-7b59-45b1-96e5-05af6ef12493, 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b and 3e593dea-3cf4-45dc-aed1-db120c3c4dff remained in state STARTED ...]
2025-08-29 21:13:18.801113 |
orchestrator | 2025-08-29 21:13:18 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:13:18.801134 | orchestrator | 2025-08-29 21:13:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:21.839313 | orchestrator | 2025-08-29 21:13:21 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:21.842118 | orchestrator | 2025-08-29 21:13:21 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:21.843933 | orchestrator | 2025-08-29 21:13:21 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:21.847197 | orchestrator | 2025-08-29 21:13:21 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:13:21.847221 | orchestrator | 2025-08-29 21:13:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:24.918086 | orchestrator | 2025-08-29 21:13:24 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:24.918479 | orchestrator | 2025-08-29 21:13:24 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:24.919240 | orchestrator | 2025-08-29 21:13:24 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:24.919965 | orchestrator | 2025-08-29 21:13:24 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:13:24.920136 | orchestrator | 2025-08-29 21:13:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:27.969367 | orchestrator | 2025-08-29 21:13:27 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:27.973329 | orchestrator | 2025-08-29 21:13:27 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:27.976467 | orchestrator | 2025-08-29 21:13:27 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:27.977926 | orchestrator | 2025-08-29 21:13:27 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state STARTED 2025-08-29 21:13:27.978385 | orchestrator | 2025-08-29 21:13:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:31.023076 | orchestrator | 2025-08-29 21:13:31 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:31.023971 | orchestrator | 2025-08-29 21:13:31 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:31.024987 | orchestrator | 2025-08-29 21:13:31 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:31.025703 | orchestrator | 2025-08-29 21:13:31 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:31.030658 | orchestrator | 2025-08-29 21:13:31 | INFO  | Task 3e593dea-3cf4-45dc-aed1-db120c3c4dff is in state SUCCESS 2025-08-29 21:13:31.031234 | orchestrator | 2025-08-29 21:13:31.032748 | orchestrator | 2025-08-29 21:13:31.032881 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:13:31.032898 | orchestrator | 2025-08-29 21:13:31.032998 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:13:31.033019 | orchestrator | Friday 29 August 2025 21:10:23 +0000 (0:00:00.250) 0:00:00.250 ********* 2025-08-29 21:13:31.033037 | orchestrator | ok: [testbed-manager] 2025-08-29 21:13:31.034263 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:13:31.034291 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 21:13:31.034304 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:13:31.034344 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:13:31.034356 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:13:31.034367 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:13:31.034378 | orchestrator | 2025-08-29 21:13:31.034390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:13:31.034402 | orchestrator | Friday 29 August 2025 21:10:24 +0000 (0:00:00.683) 0:00:00.934 ********* 2025-08-29 21:13:31.034413 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034424 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034434 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034445 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034456 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034467 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034477 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 21:13:31.034488 | orchestrator | 2025-08-29 21:13:31.034499 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 21:13:31.034509 | orchestrator | 2025-08-29 21:13:31.034520 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 21:13:31.034531 | orchestrator | Friday 29 August 2025 21:10:25 +0000 (0:00:00.608) 0:00:01.543 ********* 2025-08-29 21:13:31.034542 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:13:31.034553 | orchestrator | 2025-08-29 21:13:31.034564 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 21:13:31.034575 | orchestrator | Friday 29 August 2025 21:10:26 +0000 (0:00:01.379) 0:00:02.922 ********* 2025-08-29 21:13:31.034588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034616 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:13:31.034630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034642 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034707 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.034835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:13:31.034890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034943 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.034955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
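Each changed item above is one kolla service definition: image, volumes, and options such as pid_mode or privileged that end up on the container. kolla-ansible applies these through its own container module, but as an illustration of how such a dict maps onto container options, here is a rough translation of the prometheus-node-exporter entry (abbreviated, values copied from the log) into a plain docker run command line; the to_docker_run() helper is hypothetical and deliberately simplified.

    # Illustrative only: kolla-ansible does not shell out to `docker run`, but the
    # service dicts logged above map roughly onto an invocation like this one.
    service = {
        "container_name": "prometheus_node_exporter",
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
    }

    def to_docker_run(svc: dict) -> str:
        parts = ["docker", "run", "-d", "--name", svc["container_name"]]
        if svc.get("pid_mode"):
            parts.append(f"--pid={svc['pid_mode']}")
        if svc.get("privileged"):
            parts.append("--privileged")
        for volume in svc.get("volumes", []):
            if volume:                     # some definitions carry empty placeholder entries
                parts += ["-v", volume]
        parts.append(svc["image"])
        return " ".join(parts)

    print(to_docker_run(service))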
2025-08-29 21:13:31.034967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.034990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035092 | orchestrator | 2025-08-29 21:13:31.035104 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 21:13:31.035115 | orchestrator | Friday 29 August 2025 21:10:29 +0000 (0:00:03.008) 0:00:05.931 ********* 2025-08-29 21:13:31.035126 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:13:31.035137 | orchestrator | 2025-08-29 21:13:31.035148 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 21:13:31.035159 | orchestrator | Friday 29 August 2025 21:10:30 +0000 (0:00:01.196) 0:00:07.128 ********* 2025-08-29 21:13:31.035171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:13:31.035193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.035286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035370 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035387 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:13:31.035411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.035552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.035586 | orchestrator | 2025-08-29 21:13:31.035598 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-08-29 21:13:31.035609 | orchestrator | Friday 29 August 2025 21:10:36 +0000 (0:00:06.284) 0:00:13.412 ********* 2025-08-29 21:13:31.035626 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 21:13:31.035642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.035654 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.035673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 21:13:31.035686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.035714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.035753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035765 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.035798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.035811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.035853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.035881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.035923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.035935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.035946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.035957 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.035969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.035987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.035998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036010 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.036025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036066 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.036078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036118 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.036129 | orchestrator | 2025-08-29 21:13:31.036140 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 21:13:31.036151 | orchestrator | Friday 29 August 2025 21:10:38 +0000 (0:00:01.779) 0:00:15.192 ********* 2025-08-29 21:13:31.036167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 21:13:31.036179 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036190 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036210 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 21:13:31.036228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036240 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.036251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036365 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036388 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.036403 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.036414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036473 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 21:13:31.036484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.036496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036535 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.036546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 
21:13:31.036582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036594 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.036605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 21:13:31.036617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 21:13:31.036640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.036651 | orchestrator | 2025-08-29 21:13:31.036662 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 21:13:31.036674 | orchestrator | Friday 29 August 2025 21:10:40 +0000 (0:00:02.067) 0:00:17.259 ********* 2025-08-29 21:13:31.036689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.036701 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:13:31.037200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037235 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037246 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.037286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037425 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:13:31.037469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037494 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.037668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.037749 | orchestrator | 2025-08-29 21:13:31.037760 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 21:13:31.037772 | orchestrator | Friday 29 August 2025 21:10:48 +0000 (0:00:07.257) 0:00:24.517 ********* 2025-08-29 21:13:31.038111 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 21:13:31.038135 | orchestrator | 2025-08-29 21:13:31.038148 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 21:13:31.038159 | orchestrator | Friday 29 August 2025 21:10:48 +0000 (0:00:00.876) 0:00:25.393 ********* 2025-08-29 21:13:31.038178 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038311 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038330 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.038341 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038353 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038364 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038392 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038404 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038449 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038462 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1070145, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4017265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038473 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038485 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038496 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038518 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038530 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038571 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038584 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038595 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038607 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038619 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038641 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038652 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038693 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1070159, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4079301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.038707 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038718 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038729 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038748 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038764 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038775 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038838 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038852 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038864 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038875 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038893 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038909 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038923 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1070142, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.038966 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.038994 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039006 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039026 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039043 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039057 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039099 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039113 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039126 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039139 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039159 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039190 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039232 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039246 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039279 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039290 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039310 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039321 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039361 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039374 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1070153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.039385 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039403 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039431 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039442 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039453 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039494 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039508 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039525 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039536 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039552 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039564 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039575 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039617 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039642 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039653 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1070140, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3997266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.039665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039681 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039692 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039704 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039721 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039739 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039750 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039762 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039777 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039840 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039853 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039873 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039892 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039903 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039915 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039931 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039943 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039954 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039978 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.039990 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040001 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040013 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040023 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1070146, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4025216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040037 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 
'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040048 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040063 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.040078 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040089 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040099 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040109 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040119 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.040133 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040144 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040154 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040173 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.040188 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040199 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040209 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040219 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.040229 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040239 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.040253 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1070151, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4047265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040263 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040279 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 21:13:31.040293 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.040304 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1070147, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4028523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040314 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1070144, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4007266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040324 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070158, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040334 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070135, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3973706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040348 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1070172, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4102294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040359 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1070156, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4067266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040375 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1070141, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4003112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
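The per-item output above is the usual Ansible pattern of looping over `find` results: the rule files under the Prometheus overlay directory are enumerated once (the item dicts carry exactly the fields `ansible.builtin.find` returns: path, mode, uid, gid, size, inode, atime/mtime/ctime and the permission flags), and each file is then handled per host, so the testbed-node hosts report `skipping` for every item and only testbed-manager, which runs the Prometheus server, reports `changed`. A minimal sketch of that pattern is shown below; it is not the actual kolla-ansible prometheus role, and the destination path, group name, and task names are illustrative assumptions.

# Illustrative sketch only -- not the kolla-ansible prometheus role.
- hosts: all
  gather_facts: false
  tasks:
    - name: Find Prometheus alert rule files
      ansible.builtin.find:
        paths: /opt/configuration/environments/kolla/files/overlays/prometheus
        patterns:
          - "*.rules"
      delegate_to: localhost          # files live on the deployment host
      run_once: true
      register: prometheus_alert_rules

    - name: Copy alert rule files to the Prometheus server host
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"   # assumed destination
        mode: "0644"
      loop: "{{ prometheus_alert_rules.files }}"        # one loop item per rule file, as in the log above
      when: inventory_hostname in groups['prometheus']  # assumed group; all other hosts skip each item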
2025-08-29 21:13:31.040390 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1070136, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3977265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040400 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1070149, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4037266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040411 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1070148, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.403202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040421 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1070168, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.4097297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 21:13:31.040431 | orchestrator | 2025-08-29 21:13:31.040441 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-08-29 21:13:31.040451 | orchestrator | Friday 29 August 2025 21:11:12 +0000 (0:00:23.274) 0:00:48.668 ********* 2025-08-29 21:13:31.040461 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 21:13:31.040470 | orchestrator | 2025-08-29 21:13:31.040480 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-08-29 21:13:31.040490 | orchestrator | Friday 29 August 2025 21:11:13 +0000 (0:00:00.860) 0:00:49.528 ********* 2025-08-29 21:13:31.040500 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040515 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040529 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040549 | orchestrator | 
manager/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040558 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 21:13:31.040568 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040578 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040587 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040597 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040607 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040616 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:13:31.040626 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040636 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040645 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040664 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040674 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040684 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040694 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040713 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040723 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040748 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040767 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040777 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040810 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040819 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040829 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040839 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.040849 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040858 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-08-29 21:13:31.040868 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-08-29 21:13:31.040878 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-08-29 21:13:31.040887 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 21:13:31.040897 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 21:13:31.040907 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 21:13:31.040917 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 21:13:31.040927 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 21:13:31.040936 | 
orchestrator | 2025-08-29 21:13:31.040946 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-08-29 21:13:31.040956 | orchestrator | Friday 29 August 2025 21:11:14 +0000 (0:00:01.453) 0:00:50.982 ********* 2025-08-29 21:13:31.040966 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.040982 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.040992 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041002 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041011 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.041021 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041031 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.041041 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041050 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.041060 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041070 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-08-29 21:13:31.041080 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041090 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-08-29 21:13:31.041099 | orchestrator | 2025-08-29 21:13:31.041109 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-08-29 21:13:31.041119 | orchestrator | Friday 29 August 2025 21:11:28 +0000 (0:00:13.944) 0:01:04.927 ********* 2025-08-29 21:13:31.041129 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041139 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041158 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041172 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041182 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041192 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041201 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041211 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041221 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-08-29 21:13:31.041241 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041250 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-08-29 21:13:31.041260 | orchestrator | 2025-08-29 21:13:31.041270 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-08-29 21:13:31.041280 | orchestrator | Friday 29 August 
2025 21:11:31 +0000 (0:00:02.938) 0:01:07.865 ********* 2025-08-29 21:13:31.041290 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041300 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041320 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041330 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041340 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041355 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041365 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041375 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-08-29 21:13:31.041391 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041400 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041410 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-08-29 21:13:31.041420 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041429 | orchestrator | 2025-08-29 21:13:31.041439 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-08-29 21:13:31.041449 | orchestrator | Friday 29 August 2025 21:11:32 +0000 (0:00:01.459) 0:01:09.325 ********* 2025-08-29 21:13:31.041459 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 21:13:31.041468 | orchestrator | 2025-08-29 21:13:31.041478 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-08-29 21:13:31.041488 | orchestrator | Friday 29 August 2025 21:11:33 +0000 (0:00:00.693) 0:01:10.019 ********* 2025-08-29 21:13:31.041497 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.041507 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041526 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041536 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041546 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041555 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041565 | orchestrator | 2025-08-29 21:13:31.041575 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-08-29 21:13:31.041585 | orchestrator | Friday 29 August 2025 21:11:34 +0000 (0:00:00.663) 0:01:10.682 ********* 2025-08-29 21:13:31.041594 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.041604 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041614 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041623 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041633 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.041642 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 21:13:31.041652 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.041661 | orchestrator | 2025-08-29 21:13:31.041671 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-08-29 21:13:31.041681 | orchestrator | Friday 29 August 2025 21:11:36 +0000 (0:00:02.483) 0:01:13.166 ********* 2025-08-29 21:13:31.041691 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041700 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.041710 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041720 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041730 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041739 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041749 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041759 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041768 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041778 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041800 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041810 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041824 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 21:13:31.041834 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.041843 | orchestrator | 2025-08-29 21:13:31.041853 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 21:13:31.041868 | orchestrator | Friday 29 August 2025 21:11:39 +0000 (0:00:02.407) 0:01:15.574 ********* 2025-08-29 21:13:31.041878 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041888 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.041897 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041907 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.041917 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.041936 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041946 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.041956 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041966 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.041975 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 21:13:31.041985 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 21:13:31.041999 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.042009 | orchestrator | 2025-08-29 21:13:31.042042 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 21:13:31.042054 | orchestrator | Friday 29 August 2025 21:11:41 +0000 (0:00:02.009) 0:01:17.583 ********* 2025-08-29 21:13:31.042064 | orchestrator | [WARNING]: Skipped 2025-08-29 21:13:31.042074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 21:13:31.042084 | orchestrator | due to this access issue: 2025-08-29 21:13:31.042094 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 21:13:31.042103 | orchestrator | not a directory 2025-08-29 21:13:31.042114 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 21:13:31.042123 | orchestrator | 2025-08-29 21:13:31.042133 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 21:13:31.042143 | orchestrator | Friday 29 August 2025 21:11:42 +0000 (0:00:00.947) 0:01:18.531 ********* 2025-08-29 21:13:31.042153 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.042163 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.042172 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.042182 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.042192 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.042202 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.042211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.042221 | orchestrator | 2025-08-29 21:13:31.042231 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 21:13:31.042241 | orchestrator | Friday 29 August 2025 21:11:42 +0000 (0:00:00.699) 0:01:19.230 ********* 2025-08-29 21:13:31.042250 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.042260 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:31.042270 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:31.042280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:31.042289 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:13:31.042299 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:13:31.042309 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:13:31.042319 | orchestrator | 2025-08-29 21:13:31.042329 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 21:13:31.042338 | orchestrator | Friday 29 August 2025 21:11:43 +0000 (0:00:00.768) 0:01:19.999 ********* 2025-08-29 21:13:31.042349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042390 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 21:13:31.042407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042428 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 21:13:31.042454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042546 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 21:13:31.042558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 21:13:31.042703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 21:13:31.042739 | orchestrator | 2025-08-29 21:13:31.042749 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 21:13:31.042759 | orchestrator | Friday 29 August 2025 21:11:48 +0000 (0:00:04.830) 0:01:24.829 ********* 2025-08-29 21:13:31.042769 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 21:13:31.042778 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:13:31.042828 | orchestrator | 2025-08-29 21:13:31.042838 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042848 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.978) 0:01:25.807 ********* 2025-08-29 21:13:31.042857 | orchestrator | 2025-08-29 21:13:31.042867 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042877 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.058) 0:01:25.866 ********* 2025-08-29 21:13:31.042886 | orchestrator | 2025-08-29 21:13:31.042896 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042906 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.056) 0:01:25.922 ********* 2025-08-29 21:13:31.042915 | orchestrator | 
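(The run of "Flush handlers" tasks here, followed by the "RUNNING HANDLER [prometheus : Restart ... container]" records below, is the standard Ansible notify/handler pattern: a config-copy task notifies a restart handler, and a flush_handlers meta task forces the queued handlers to run at that point of the play rather than at its end. A minimal sketch of that pattern follows; the host group and the container module call are illustrative stand-ins, not the actual kolla-ansible prometheus role.)

# Illustrative playbook sketch only; task/handler names mirror the log above.
- hosts: prometheus            # assumed group name, for illustration
  tasks:
    - name: Copying over prometheus config file      # a config change queues the restart handler
      ansible.builtin.template:
        src: prometheus.yml.j2
        dest: /etc/kolla/prometheus-server/prometheus.yml
      notify: Restart prometheus-server container

    - name: Flush handlers                           # run queued handlers now instead of at play end
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Restart prometheus-server container      # shows up as "RUNNING HANDLER [...]" in this log
      community.docker.docker_container:             # stand-in for kolla-ansible's own container module
        name: prometheus_server                      # container is assumed to exist already
        state: started
        restart: true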
2025-08-29 21:13:31.042925 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042934 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.161) 0:01:26.083 ********* 2025-08-29 21:13:31.042944 | orchestrator | 2025-08-29 21:13:31.042953 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042963 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.057) 0:01:26.141 ********* 2025-08-29 21:13:31.042973 | orchestrator | 2025-08-29 21:13:31.042982 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.042992 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.056) 0:01:26.198 ********* 2025-08-29 21:13:31.043001 | orchestrator | 2025-08-29 21:13:31.043011 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 21:13:31.043020 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.057) 0:01:26.255 ********* 2025-08-29 21:13:31.043030 | orchestrator | 2025-08-29 21:13:31.043039 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-08-29 21:13:31.043053 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.076) 0:01:26.331 ********* 2025-08-29 21:13:31.043063 | orchestrator | changed: [testbed-manager] 2025-08-29 21:13:31.043073 | orchestrator | 2025-08-29 21:13:31.043083 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-08-29 21:13:31.043092 | orchestrator | Friday 29 August 2025 21:12:12 +0000 (0:00:22.955) 0:01:49.286 ********* 2025-08-29 21:13:31.043102 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.043111 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:31.043121 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:13:31.043131 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:13:31.043140 | orchestrator | changed: [testbed-manager] 2025-08-29 21:13:31.043150 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:13:31.043159 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.043168 | orchestrator | 2025-08-29 21:13:31.043178 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-08-29 21:13:31.043188 | orchestrator | Friday 29 August 2025 21:12:28 +0000 (0:00:15.793) 0:02:05.080 ********* 2025-08-29 21:13:31.043197 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:31.043207 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.043222 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.043232 | orchestrator | 2025-08-29 21:13:31.043242 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-08-29 21:13:31.043251 | orchestrator | Friday 29 August 2025 21:12:34 +0000 (0:00:05.780) 0:02:10.860 ********* 2025-08-29 21:13:31.043261 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:31.043270 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.043280 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.043289 | orchestrator | 2025-08-29 21:13:31.043299 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-08-29 21:13:31.043309 | orchestrator | Friday 29 August 2025 21:12:45 +0000 (0:00:11.154) 0:02:22.014 ********* 2025-08-29 21:13:31.043318 | 
orchestrator | changed: [testbed-manager] 2025-08-29 21:13:31.043333 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.043343 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:13:31.043350 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:13:31.043358 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:13:31.043366 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.043374 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:31.043381 | orchestrator | 2025-08-29 21:13:31.043389 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-08-29 21:13:31.043397 | orchestrator | Friday 29 August 2025 21:13:00 +0000 (0:00:14.609) 0:02:36.624 ********* 2025-08-29 21:13:31.043405 | orchestrator | changed: [testbed-manager] 2025-08-29 21:13:31.043413 | orchestrator | 2025-08-29 21:13:31.043421 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-08-29 21:13:31.043429 | orchestrator | Friday 29 August 2025 21:13:08 +0000 (0:00:08.082) 0:02:44.706 ********* 2025-08-29 21:13:31.043437 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:31.043444 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:31.043452 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:31.043460 | orchestrator | 2025-08-29 21:13:31.043468 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-08-29 21:13:31.043476 | orchestrator | Friday 29 August 2025 21:13:18 +0000 (0:00:09.782) 0:02:54.489 ********* 2025-08-29 21:13:31.043484 | orchestrator | changed: [testbed-manager] 2025-08-29 21:13:31.043491 | orchestrator | 2025-08-29 21:13:31.043499 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-08-29 21:13:31.043507 | orchestrator | Friday 29 August 2025 21:13:22 +0000 (0:00:04.756) 0:02:59.245 ********* 2025-08-29 21:13:31.043515 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:13:31.043523 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:13:31.043531 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:13:31.043538 | orchestrator | 2025-08-29 21:13:31.043546 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:13:31.043554 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 21:13:31.043562 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:13:31.043570 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:13:31.043578 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:13:31.043586 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 21:13:31.043594 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 21:13:31.043602 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 21:13:31.043614 | orchestrator | 2025-08-29 21:13:31.043622 | orchestrator | 2025-08-29 21:13:31.043630 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:13:31.043638 | orchestrator | Friday 29 
August 2025 21:13:29 +0000 (0:00:06.265) 0:03:05.511 ********* 2025-08-29 21:13:31.043646 | orchestrator | =============================================================================== 2025-08-29 21:13:31.043654 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.27s 2025-08-29 21:13:31.043662 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.96s 2025-08-29 21:13:31.043673 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.79s 2025-08-29 21:13:31.043681 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.61s 2025-08-29 21:13:31.043689 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.94s 2025-08-29 21:13:31.043697 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.15s 2025-08-29 21:13:31.043705 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.78s 2025-08-29 21:13:31.043713 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.08s 2025-08-29 21:13:31.043721 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.26s 2025-08-29 21:13:31.043728 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.28s 2025-08-29 21:13:31.043736 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.27s 2025-08-29 21:13:31.043744 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.78s 2025-08-29 21:13:31.043752 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.83s 2025-08-29 21:13:31.043760 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.76s 2025-08-29 21:13:31.043768 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.01s 2025-08-29 21:13:31.043776 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.94s 2025-08-29 21:13:31.043795 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s 2025-08-29 21:13:31.043803 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.41s 2025-08-29 21:13:31.043815 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.07s 2025-08-29 21:13:31.043823 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.01s 2025-08-29 21:13:31.043831 | orchestrator | 2025-08-29 21:13:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:34.057711 | orchestrator | 2025-08-29 21:13:34 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:34.058062 | orchestrator | 2025-08-29 21:13:34 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:34.058821 | orchestrator | 2025-08-29 21:13:34 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:34.059389 | orchestrator | 2025-08-29 21:13:34 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:34.059476 | orchestrator | 2025-08-29 21:13:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:37.086747 | orchestrator | 2025-08-29 21:13:37 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is 
in state STARTED 2025-08-29 21:13:37.087326 | orchestrator | 2025-08-29 21:13:37 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:37.087946 | orchestrator | 2025-08-29 21:13:37 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:37.088615 | orchestrator | 2025-08-29 21:13:37 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:37.088965 | orchestrator | 2025-08-29 21:13:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:40.117876 | orchestrator | 2025-08-29 21:13:40 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:40.120026 | orchestrator | 2025-08-29 21:13:40 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:40.124165 | orchestrator | 2025-08-29 21:13:40 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:40.125594 | orchestrator | 2025-08-29 21:13:40 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:40.126105 | orchestrator | 2025-08-29 21:13:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:43.166522 | orchestrator | 2025-08-29 21:13:43 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:43.168843 | orchestrator | 2025-08-29 21:13:43 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:43.170255 | orchestrator | 2025-08-29 21:13:43 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:43.172266 | orchestrator | 2025-08-29 21:13:43 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:43.172291 | orchestrator | 2025-08-29 21:13:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:46.209460 | orchestrator | 2025-08-29 21:13:46 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:46.211896 | orchestrator | 2025-08-29 21:13:46 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:46.213865 | orchestrator | 2025-08-29 21:13:46 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:46.215567 | orchestrator | 2025-08-29 21:13:46 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:46.215588 | orchestrator | 2025-08-29 21:13:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:49.255501 | orchestrator | 2025-08-29 21:13:49 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:49.257517 | orchestrator | 2025-08-29 21:13:49 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:49.259515 | orchestrator | 2025-08-29 21:13:49 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:49.260919 | orchestrator | 2025-08-29 21:13:49 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:49.260941 | orchestrator | 2025-08-29 21:13:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:52.292948 | orchestrator | 2025-08-29 21:13:52 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:52.294269 | orchestrator | 2025-08-29 21:13:52 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state STARTED 2025-08-29 21:13:52.296442 | orchestrator | 2025-08-29 21:13:52 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in 
state STARTED 2025-08-29 21:13:52.297534 | orchestrator | 2025-08-29 21:13:52 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:52.298038 | orchestrator | 2025-08-29 21:13:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:55.342605 | orchestrator | 2025-08-29 21:13:55 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:55.344941 | orchestrator | 2025-08-29 21:13:55 | INFO  | Task ec1ab385-8dbf-4c67-ba01-62b04adc2ac1 is in state SUCCESS 2025-08-29 21:13:55.346469 | orchestrator | 2025-08-29 21:13:55.346501 | orchestrator | 2025-08-29 21:13:55.346509 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:13:55.346517 | orchestrator | 2025-08-29 21:13:55.346525 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:13:55.346532 | orchestrator | Friday 29 August 2025 21:11:11 +0000 (0:00:00.203) 0:00:00.203 ********* 2025-08-29 21:13:55.346539 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:13:55.346547 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:13:55.346554 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:13:55.346561 | orchestrator | 2025-08-29 21:13:55.346568 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:13:55.346574 | orchestrator | Friday 29 August 2025 21:11:12 +0000 (0:00:00.237) 0:00:00.441 ********* 2025-08-29 21:13:55.346581 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-08-29 21:13:55.346589 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-08-29 21:13:55.346596 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-08-29 21:13:55.346603 | orchestrator | 2025-08-29 21:13:55.346610 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-08-29 21:13:55.346617 | orchestrator | 2025-08-29 21:13:55.346624 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 21:13:55.346631 | orchestrator | Friday 29 August 2025 21:11:12 +0000 (0:00:00.335) 0:00:00.777 ********* 2025-08-29 21:13:55.346637 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:13:55.346645 | orchestrator | 2025-08-29 21:13:55.346652 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-08-29 21:13:55.346659 | orchestrator | Friday 29 August 2025 21:11:12 +0000 (0:00:00.497) 0:00:01.274 ********* 2025-08-29 21:13:55.346666 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-08-29 21:13:55.346673 | orchestrator | 2025-08-29 21:13:55.346680 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-08-29 21:13:55.346687 | orchestrator | Friday 29 August 2025 21:11:16 +0000 (0:00:03.600) 0:00:04.874 ********* 2025-08-29 21:13:55.346694 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-08-29 21:13:55.346701 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-08-29 21:13:55.346708 | orchestrator | 2025-08-29 21:13:55.346715 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-08-29 21:13:55.346722 | orchestrator | 
Friday 29 August 2025 21:11:22 +0000 (0:00:06.263) 0:00:11.138 ********* 2025-08-29 21:13:55.346729 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:13:55.346737 | orchestrator | 2025-08-29 21:13:55.346744 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-08-29 21:13:55.346751 | orchestrator | Friday 29 August 2025 21:11:26 +0000 (0:00:03.264) 0:00:14.402 ********* 2025-08-29 21:13:55.346758 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:13:55.346783 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-08-29 21:13:55.346792 | orchestrator | 2025-08-29 21:13:55.346798 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-08-29 21:13:55.346805 | orchestrator | Friday 29 August 2025 21:11:29 +0000 (0:00:03.856) 0:00:18.259 ********* 2025-08-29 21:13:55.346812 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:13:55.346820 | orchestrator | 2025-08-29 21:13:55.346840 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-08-29 21:13:55.346847 | orchestrator | Friday 29 August 2025 21:11:33 +0000 (0:00:03.512) 0:00:21.771 ********* 2025-08-29 21:13:55.346854 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-08-29 21:13:55.346861 | orchestrator | 2025-08-29 21:13:55.346868 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 21:13:55.346885 | orchestrator | Friday 29 August 2025 21:11:37 +0000 (0:00:03.937) 0:00:25.709 ********* 2025-08-29 21:13:55.346912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.346922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.346934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.346947 | orchestrator | 2025-08-29 21:13:55.346954 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-08-29 21:13:55.346961 | orchestrator | Friday 29 August 2025 21:11:41 +0000 (0:00:04.587) 0:00:30.296 ********* 2025-08-29 21:13:55.346972 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:13:55.346980 | orchestrator | 2025-08-29 21:13:55.346987 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-08-29 21:13:55.346993 | orchestrator | Friday 29 August 2025 21:11:42 +0000 (0:00:00.535) 0:00:30.832 ********* 2025-08-29 21:13:55.347000 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:55.347007 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.347014 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:55.347021 | orchestrator | 2025-08-29 21:13:55.347028 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-08-29 21:13:55.347035 | orchestrator | Friday 29 August 2025 21:11:46 +0000 (0:00:03.671) 0:00:34.504 ********* 2025-08-29 21:13:55.347042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347063 | orchestrator | 2025-08-29 21:13:55.347070 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-08-29 21:13:55.347077 | orchestrator | Friday 29 August 2025 21:11:47 +0000 (0:00:01.470) 0:00:35.974 ********* 2025-08-29 21:13:55.347084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347091 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347098 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:13:55.347105 | orchestrator | 2025-08-29 21:13:55.347112 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-08-29 21:13:55.347119 | orchestrator | Friday 29 August 2025 21:11:48 +0000 (0:00:01.244) 0:00:37.219 ********* 2025-08-29 21:13:55.347126 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:13:55.347133 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:13:55.347140 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:13:55.347147 | orchestrator | 2025-08-29 21:13:55.347154 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-08-29 21:13:55.347161 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.761) 0:00:37.981 ********* 2025-08-29 21:13:55.347172 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347179 | orchestrator | 2025-08-29 21:13:55.347186 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-08-29 21:13:55.347193 | orchestrator | Friday 29 August 2025 21:11:49 +0000 (0:00:00.122) 0:00:38.104 ********* 2025-08-29 21:13:55.347200 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347207 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
21:13:55.347213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347220 | orchestrator | 2025-08-29 21:13:55.347227 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 21:13:55.347234 | orchestrator | Friday 29 August 2025 21:11:50 +0000 (0:00:00.259) 0:00:38.363 ********* 2025-08-29 21:13:55.347241 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:13:55.347248 | orchestrator | 2025-08-29 21:13:55.347255 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 21:13:55.347266 | orchestrator | Friday 29 August 2025 21:11:50 +0000 (0:00:00.489) 0:00:38.853 ********* 2025-08-29 21:13:55.347278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347311 | orchestrator | 2025-08-29 21:13:55.347318 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 21:13:55.347325 | orchestrator | Friday 29 August 2025 21:11:53 +0000 (0:00:03.267) 0:00:42.120 ********* 2025-08-29 21:13:55.347338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347349 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347366 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347387 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347394 | orchestrator | 2025-08-29 21:13:55.347401 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 21:13:55.347413 | orchestrator | Friday 29 August 2025 21:11:56 +0000 (0:00:02.910) 0:00:45.031 ********* 2025-08-29 21:13:55.347424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347432 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347451 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 21:13:55.347470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347478 | orchestrator | 2025-08-29 21:13:55.347485 | orchestrator | TASK [glance : Creating 
TLS backend PEM File] ********************************** 2025-08-29 21:13:55.347491 | orchestrator | Friday 29 August 2025 21:11:59 +0000 (0:00:02.983) 0:00:48.014 ********* 2025-08-29 21:13:55.347498 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347519 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347527 | orchestrator | 2025-08-29 21:13:55.347534 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 21:13:55.347540 | orchestrator | Friday 29 August 2025 21:12:02 +0000 (0:00:03.185) 0:00:51.199 ********* 2025-08-29 21:13:55.347553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347585 | orchestrator | 2025-08-29 21:13:55.347592 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 21:13:55.347598 | orchestrator | Friday 29 August 2025 21:12:06 +0000 (0:00:03.460) 0:00:54.660 ********* 2025-08-29 21:13:55.347605 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:55.347612 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:55.347620 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.347627 | orchestrator | 2025-08-29 21:13:55.347634 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 21:13:55.347644 | orchestrator | Friday 29 August 2025 21:12:11 +0000 (0:00:05.109) 0:00:59.769 ********* 2025-08-29 21:13:55.347656 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347670 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347677 | orchestrator | 2025-08-29 21:13:55.347684 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] 
****************** 2025-08-29 21:13:55.347691 | orchestrator | Friday 29 August 2025 21:12:16 +0000 (0:00:05.477) 0:01:05.247 ********* 2025-08-29 21:13:55.347698 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347705 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347712 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347719 | orchestrator | 2025-08-29 21:13:55.347726 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 21:13:55.347733 | orchestrator | Friday 29 August 2025 21:12:22 +0000 (0:00:05.138) 0:01:10.385 ********* 2025-08-29 21:13:55.347740 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347747 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347754 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347761 | orchestrator | 2025-08-29 21:13:55.347791 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 21:13:55.347799 | orchestrator | Friday 29 August 2025 21:12:25 +0000 (0:00:03.431) 0:01:13.816 ********* 2025-08-29 21:13:55.347806 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347813 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347820 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347827 | orchestrator | 2025-08-29 21:13:55.347834 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 21:13:55.347840 | orchestrator | Friday 29 August 2025 21:12:28 +0000 (0:00:02.854) 0:01:16.671 ********* 2025-08-29 21:13:55.347847 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347854 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347861 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347868 | orchestrator | 2025-08-29 21:13:55.347875 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 21:13:55.347882 | orchestrator | Friday 29 August 2025 21:12:28 +0000 (0:00:00.250) 0:01:16.922 ********* 2025-08-29 21:13:55.347889 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 21:13:55.347896 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.347902 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 21:13:55.347909 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.347916 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 21:13:55.347923 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.347930 | orchestrator | 2025-08-29 21:13:55.347937 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 21:13:55.347944 | orchestrator | Friday 29 August 2025 21:12:32 +0000 (0:00:04.318) 0:01:21.241 ********* 2025-08-29 21:13:55.347956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 21:13:55.347998 | orchestrator | 2025-08-29 21:13:55.348005 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 21:13:55.348010 | orchestrator | Friday 29 August 2025 21:12:37 +0000 (0:00:04.652) 0:01:25.893 ********* 2025-08-29 21:13:55.348016 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:13:55.348022 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:13:55.348029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:13:55.348036 | orchestrator | 2025-08-29 21:13:55.348043 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-08-29 21:13:55.348050 | orchestrator | Friday 29 August 2025 21:12:37 +0000 (0:00:00.278) 0:01:26.171 ********* 2025-08-29 21:13:55.348056 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348062 | orchestrator | 2025-08-29 21:13:55.348069 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 21:13:55.348076 | orchestrator | Friday 29 August 2025 21:12:40 +0000 (0:00:02.144) 0:01:28.316 ********* 2025-08-29 21:13:55.348083 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348090 | orchestrator | 2025-08-29 21:13:55.348097 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-08-29 21:13:55.348103 | orchestrator | Friday 29 August 2025 21:12:42 +0000 (0:00:02.134) 0:01:30.450 ********* 2025-08-29 21:13:55.348110 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348117 | orchestrator | 2025-08-29 21:13:55.348124 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-08-29 21:13:55.348134 | orchestrator | Friday 29 August 2025 21:12:44 +0000 (0:00:02.068) 0:01:32.519 ********* 2025-08-29 21:13:55.348141 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348148 | orchestrator | 2025-08-29 21:13:55.348155 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-08-29 21:13:55.348162 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:37.343) 0:02:09.863 ********* 2025-08-29 21:13:55.348169 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348175 | 
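The glance bootstrap above is bracketed by two tasks that first enable and later disable MariaDB's log_bin_trust_function_creators, a flag that is typically required while binary logging is active and a schema migration defines triggers or stored functions. A minimal sketch of that enable/run/disable pattern, written against a generic DB-API connection rather than the kolla-ansible module that actually performs it:

    from contextlib import contextmanager

    @contextmanager
    def trust_function_creators(conn):
        """Temporarily allow creating functions/triggers while binary
        logging is enabled, then restore the safer default."""
        cur = conn.cursor()
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        try:
            yield
        finally:
            # Mirrors the "Disable log_bin_trust_function_creators" task above.
            cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
            cur.close()

    # Usage sketch (the connection and the bootstrap callable are assumed):
    # with trust_function_creators(conn):
    #     run_glance_db_sync()   # hypothetical stand-in for the bootstrap container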
orchestrator | 2025-08-29 21:13:55.348182 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 21:13:55.348189 | orchestrator | Friday 29 August 2025 21:13:23 +0000 (0:00:02.169) 0:02:12.033 ********* 2025-08-29 21:13:55.348196 | orchestrator | 2025-08-29 21:13:55.348203 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 21:13:55.348210 | orchestrator | Friday 29 August 2025 21:13:24 +0000 (0:00:00.487) 0:02:12.520 ********* 2025-08-29 21:13:55.348217 | orchestrator | 2025-08-29 21:13:55.348224 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 21:13:55.348231 | orchestrator | Friday 29 August 2025 21:13:24 +0000 (0:00:00.119) 0:02:12.640 ********* 2025-08-29 21:13:55.348238 | orchestrator | 2025-08-29 21:13:55.348245 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-08-29 21:13:55.348251 | orchestrator | Friday 29 August 2025 21:13:24 +0000 (0:00:00.200) 0:02:12.840 ********* 2025-08-29 21:13:55.348258 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:13:55.348265 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:13:55.348272 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:13:55.348279 | orchestrator | 2025-08-29 21:13:55.348286 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:13:55.348294 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 21:13:55.348303 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:13:55.348311 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:13:55.348324 | orchestrator | 2025-08-29 21:13:55.348333 | orchestrator | 2025-08-29 21:13:55.348341 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:13:55.348347 | orchestrator | Friday 29 August 2025 21:13:54 +0000 (0:00:30.122) 0:02:42.963 ********* 2025-08-29 21:13:55.348353 | orchestrator | =============================================================================== 2025-08-29 21:13:55.348361 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 37.34s 2025-08-29 21:13:55.348370 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.12s 2025-08-29 21:13:55.348378 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.26s 2025-08-29 21:13:55.348386 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.48s 2025-08-29 21:13:55.348393 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.14s 2025-08-29 21:13:55.348400 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.11s 2025-08-29 21:13:55.348412 | orchestrator | glance : Check glance containers ---------------------------------------- 4.65s 2025-08-29 21:13:55.348420 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.59s 2025-08-29 21:13:55.348427 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.32s 2025-08-29 21:13:55.348435 | orchestrator | service-ks-register : glance | Granting 
user roles ---------------------- 3.94s 2025-08-29 21:13:55.348443 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.86s 2025-08-29 21:13:55.348450 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.67s 2025-08-29 21:13:55.348457 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.60s 2025-08-29 21:13:55.348464 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.51s 2025-08-29 21:13:55.348470 | orchestrator | glance : Copying over config.json files for services -------------------- 3.46s 2025-08-29 21:13:55.348476 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.43s 2025-08-29 21:13:55.348483 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.27s 2025-08-29 21:13:55.348489 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.26s 2025-08-29 21:13:55.348495 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.19s 2025-08-29 21:13:55.348501 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 2.98s 2025-08-29 21:13:55.348507 | orchestrator | 2025-08-29 21:13:55 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:55.349649 | orchestrator | 2025-08-29 21:13:55 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:55.352206 | orchestrator | 2025-08-29 21:13:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:13:58.384711 | orchestrator | 2025-08-29 21:13:58 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:13:58.385977 | orchestrator | 2025-08-29 21:13:58 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:13:58.387831 | orchestrator | 2025-08-29 21:13:58 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:13:58.389564 | orchestrator | 2025-08-29 21:13:58 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:13:58.389594 | orchestrator | 2025-08-29 21:13:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:01.441220 | orchestrator | 2025-08-29 21:14:01 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:01.443789 | orchestrator | 2025-08-29 21:14:01 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:01.444347 | orchestrator | 2025-08-29 21:14:01 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:01.444999 | orchestrator | 2025-08-29 21:14:01 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:01.445021 | orchestrator | 2025-08-29 21:14:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:04.490845 | orchestrator | 2025-08-29 21:14:04 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:04.491663 | orchestrator | 2025-08-29 21:14:04 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:04.493344 | orchestrator | 2025-08-29 21:14:04 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:04.494537 | orchestrator | 2025-08-29 21:14:04 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:04.494663 | 
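The repeated "Task ... is in state STARTED" lines around this point come from a simple polling loop: the deploy waits on a set of task IDs, re-queries each one, and sleeps one second between rounds until every task leaves the running states. A minimal sketch of such a loop; get_task_state is a hypothetical helper standing in for whatever the OSISM tooling actually calls:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll each task until none of them is still running.

        get_task_state(task_id) -> str is a hypothetical callable that
        returns Celery-style states such as "STARTED" or "SUCCESS".
        """
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)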
orchestrator | 2025-08-29 21:14:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:07.535057 | orchestrator | 2025-08-29 21:14:07 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:07.537652 | orchestrator | 2025-08-29 21:14:07 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:07.539888 | orchestrator | 2025-08-29 21:14:07 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:07.542076 | orchestrator | 2025-08-29 21:14:07 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:07.542103 | orchestrator | 2025-08-29 21:14:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:10.583639 | orchestrator | 2025-08-29 21:14:10 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:10.585294 | orchestrator | 2025-08-29 21:14:10 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:10.586775 | orchestrator | 2025-08-29 21:14:10 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:10.588484 | orchestrator | 2025-08-29 21:14:10 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:10.588527 | orchestrator | 2025-08-29 21:14:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:13.628021 | orchestrator | 2025-08-29 21:14:13 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:13.629196 | orchestrator | 2025-08-29 21:14:13 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:13.630458 | orchestrator | 2025-08-29 21:14:13 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:13.633176 | orchestrator | 2025-08-29 21:14:13 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:13.633418 | orchestrator | 2025-08-29 21:14:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:16.668403 | orchestrator | 2025-08-29 21:14:16 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:16.669931 | orchestrator | 2025-08-29 21:14:16 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:16.671454 | orchestrator | 2025-08-29 21:14:16 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:16.673004 | orchestrator | 2025-08-29 21:14:16 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:16.673028 | orchestrator | 2025-08-29 21:14:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:19.714573 | orchestrator | 2025-08-29 21:14:19 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:19.716189 | orchestrator | 2025-08-29 21:14:19 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:19.718359 | orchestrator | 2025-08-29 21:14:19 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:19.721291 | orchestrator | 2025-08-29 21:14:19 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:19.721371 | orchestrator | 2025-08-29 21:14:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:22.754494 | orchestrator | 2025-08-29 21:14:22 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:22.755920 | orchestrator | 2025-08-29 
21:14:22 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:22.757282 | orchestrator | 2025-08-29 21:14:22 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:22.758958 | orchestrator | 2025-08-29 21:14:22 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:22.758983 | orchestrator | 2025-08-29 21:14:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:25.801285 | orchestrator | 2025-08-29 21:14:25 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:25.802903 | orchestrator | 2025-08-29 21:14:25 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:25.804231 | orchestrator | 2025-08-29 21:14:25 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:25.805502 | orchestrator | 2025-08-29 21:14:25 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:25.805543 | orchestrator | 2025-08-29 21:14:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:28.840357 | orchestrator | 2025-08-29 21:14:28 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:28.841811 | orchestrator | 2025-08-29 21:14:28 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:28.843498 | orchestrator | 2025-08-29 21:14:28 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:28.844810 | orchestrator | 2025-08-29 21:14:28 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:28.844835 | orchestrator | 2025-08-29 21:14:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:31.884448 | orchestrator | 2025-08-29 21:14:31 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:31.886387 | orchestrator | 2025-08-29 21:14:31 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:31.888327 | orchestrator | 2025-08-29 21:14:31 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:31.890435 | orchestrator | 2025-08-29 21:14:31 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:31.890649 | orchestrator | 2025-08-29 21:14:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:34.928228 | orchestrator | 2025-08-29 21:14:34 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:34.929280 | orchestrator | 2025-08-29 21:14:34 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:34.930094 | orchestrator | 2025-08-29 21:14:34 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:34.930916 | orchestrator | 2025-08-29 21:14:34 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:34.931176 | orchestrator | 2025-08-29 21:14:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:37.973971 | orchestrator | 2025-08-29 21:14:37 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:37.975304 | orchestrator | 2025-08-29 21:14:37 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:37.977332 | orchestrator | 2025-08-29 21:14:37 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:37.978794 | orchestrator | 2025-08-29 
21:14:37 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:37.978819 | orchestrator | 2025-08-29 21:14:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:41.026419 | orchestrator | 2025-08-29 21:14:41 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:41.029509 | orchestrator | 2025-08-29 21:14:41 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:41.043073 | orchestrator | 2025-08-29 21:14:41 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:41.043117 | orchestrator | 2025-08-29 21:14:41 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:41.043129 | orchestrator | 2025-08-29 21:14:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:44.084313 | orchestrator | 2025-08-29 21:14:44 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:44.085569 | orchestrator | 2025-08-29 21:14:44 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:44.087766 | orchestrator | 2025-08-29 21:14:44 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:44.089176 | orchestrator | 2025-08-29 21:14:44 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:44.089205 | orchestrator | 2025-08-29 21:14:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:47.133821 | orchestrator | 2025-08-29 21:14:47 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:47.136253 | orchestrator | 2025-08-29 21:14:47 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state STARTED 2025-08-29 21:14:47.138372 | orchestrator | 2025-08-29 21:14:47 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:47.140860 | orchestrator | 2025-08-29 21:14:47 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:47.141180 | orchestrator | 2025-08-29 21:14:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:50.187398 | orchestrator | 2025-08-29 21:14:50 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:50.192389 | orchestrator | 2025-08-29 21:14:50 | INFO  | Task ce44fe80-7b59-45b1-96e5-05af6ef12493 is in state SUCCESS 2025-08-29 21:14:50.194438 | orchestrator | 2025-08-29 21:14:50.194469 | orchestrator | 2025-08-29 21:14:50.194478 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:14:50.194487 | orchestrator | 2025-08-29 21:14:50.194495 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:14:50.194503 | orchestrator | Friday 29 August 2025 21:11:38 +0000 (0:00:00.400) 0:00:00.400 ********* 2025-08-29 21:14:50.194511 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:14:50.194519 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:14:50.194526 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:14:50.194535 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:14:50.194542 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:14:50.194550 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:14:50.194557 | orchestrator | 2025-08-29 21:14:50.194582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:14:50.194590 | orchestrator | Friday 29 August 
2025 21:11:39 +0000 (0:00:00.704) 0:00:01.105 ********* 2025-08-29 21:14:50.194598 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 21:14:50.194606 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 21:14:50.194614 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 21:14:50.194621 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 21:14:50.194627 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-08-29 21:14:50.194634 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-08-29 21:14:50.194641 | orchestrator | 2025-08-29 21:14:50.194648 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-08-29 21:14:50.194655 | orchestrator | 2025-08-29 21:14:50.194662 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 21:14:50.194669 | orchestrator | Friday 29 August 2025 21:11:40 +0000 (0:00:01.066) 0:00:02.171 ********* 2025-08-29 21:14:50.194789 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:14:50.194798 | orchestrator | 2025-08-29 21:14:50.194846 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-08-29 21:14:50.194884 | orchestrator | Friday 29 August 2025 21:11:41 +0000 (0:00:01.112) 0:00:03.284 ********* 2025-08-29 21:14:50.194893 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-08-29 21:14:50.194899 | orchestrator | 2025-08-29 21:14:50.195085 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-08-29 21:14:50.195094 | orchestrator | Friday 29 August 2025 21:11:45 +0000 (0:00:03.456) 0:00:06.740 ********* 2025-08-29 21:14:50.195101 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-08-29 21:14:50.195108 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-08-29 21:14:50.195114 | orchestrator | 2025-08-29 21:14:50.195121 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-08-29 21:14:50.195128 | orchestrator | Friday 29 August 2025 21:11:51 +0000 (0:00:06.726) 0:00:13.467 ********* 2025-08-29 21:14:50.195163 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:14:50.195171 | orchestrator | 2025-08-29 21:14:50.195177 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-08-29 21:14:50.195184 | orchestrator | Friday 29 August 2025 21:11:54 +0000 (0:00:02.748) 0:00:16.216 ********* 2025-08-29 21:14:50.195190 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:14:50.195197 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-08-29 21:14:50.195203 | orchestrator | 2025-08-29 21:14:50.195210 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-08-29 21:14:50.195255 | orchestrator | Friday 29 August 2025 21:11:58 +0000 (0:00:03.510) 0:00:19.727 ********* 2025-08-29 21:14:50.195263 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:14:50.195270 | orchestrator | 2025-08-29 21:14:50.195276 | 
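The cinderv3 endpoints registered above keep the literal %(tenant_id)s placeholder in the catalog URL; it is expanded later, per project, using Python %-style mapping substitution rather than at registration time. A small illustration of how such a template resolves (the project ID below is made up):

    # Catalog URLs as registered above, with the raw placeholder kept verbatim.
    ENDPOINTS = {
        "internal": "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s",
        "public": "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s",
    }

    def resolve(interface: str, tenant_id: str) -> str:
        """Expand the %-style placeholder the way a catalog consumer would."""
        return ENDPOINTS[interface] % {"tenant_id": tenant_id}

    if __name__ == "__main__":
        # Hypothetical project ID, purely for illustration.
        print(resolve("public", "0123456789abcdef0123456789abcdef"))
        # -> https://api.testbed.osism.xyz:8776/v3/0123456789abcdef0123456789abcdef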
orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-08-29 21:14:50.195334 | orchestrator | Friday 29 August 2025 21:12:01 +0000 (0:00:03.555) 0:00:23.283 ********* 2025-08-29 21:14:50.195378 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-08-29 21:14:50.195387 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-08-29 21:14:50.195394 | orchestrator | 2025-08-29 21:14:50.195402 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 21:14:50.195801 | orchestrator | Friday 29 August 2025 21:12:09 +0000 (0:00:07.960) 0:00:31.244 ********* 2025-08-29 21:14:50.195839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.195862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.195871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.195879 | 
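The cinder-api containers above declare a healthcheck of the form healthcheck_curl http://<ip>:8776 with interval/retries/timeout values. As a simplified illustration only, not kolla's actual healthcheck_curl script, an equivalent HTTP reachability probe reusing those knobs could look like:

    import time
    import urllib.error
    import urllib.request

    def http_probe(url: str, retries: int = 3, interval: float = 30.0,
                   timeout: float = 30.0) -> bool:
        """Return True once the URL answers at all within the allowed
        attempts; parameter names mirror the healthcheck dicts above."""
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout):
                    return True
            except urllib.error.HTTPError:
                # Any HTTP response at all means the API is listening.
                return True
            except (urllib.error.URLError, OSError):
                if attempt < retries:
                    time.sleep(interval)
        return False

    # Example with a value from the log (only reachable inside the testbed):
    # http_probe("http://192.168.16.10:8776")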
orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.195995 | orchestrator | 2025-08-29 21:14:50.196002 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 21:14:50.196010 | orchestrator | Friday 29 August 2025 21:12:11 +0000 (0:00:02.209) 
0:00:33.453 ********* 2025-08-29 21:14:50.196017 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.196024 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.196031 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.196038 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.196044 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.196051 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.196058 | orchestrator | 2025-08-29 21:14:50.196065 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 21:14:50.196071 | orchestrator | Friday 29 August 2025 21:12:12 +0000 (0:00:00.526) 0:00:33.979 ********* 2025-08-29 21:14:50.196078 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.196085 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.196092 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.196102 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:14:50.196108 | orchestrator | 2025-08-29 21:14:50.196115 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-08-29 21:14:50.196122 | orchestrator | Friday 29 August 2025 21:12:13 +0000 (0:00:01.420) 0:00:35.400 ********* 2025-08-29 21:14:50.196129 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-08-29 21:14:50.196135 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-08-29 21:14:50.196142 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-08-29 21:14:50.196149 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-08-29 21:14:50.196155 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-08-29 21:14:50.196162 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-08-29 21:14:50.196169 | orchestrator | 2025-08-29 21:14:50.196175 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-08-29 21:14:50.196182 | orchestrator | Friday 29 August 2025 21:12:16 +0000 (0:00:03.125) 0:00:38.525 ********* 2025-08-29 21:14:50.196189 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196201 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196229 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196237 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196247 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196254 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 21:14:50.196269 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196292 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196303 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196310 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196321 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196328 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 21:14:50.196335 | orchestrator | 2025-08-29 21:14:50.196341 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-08-29 21:14:50.196348 | orchestrator | Friday 29 August 2025 21:12:22 +0000 (0:00:05.447) 0:00:43.973 ********* 2025-08-29 21:14:50.196354 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:14:50.196361 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:14:50.196368 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 21:14:50.196375 | orchestrator | 2025-08-29 21:14:50.196382 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-08-29 21:14:50.196388 | orchestrator | Friday 29 August 2025 21:12:24 +0000 (0:00:02.101) 0:00:46.075 ********* 2025-08-29 21:14:50.196414 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-08-29 21:14:50.196423 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-08-29 21:14:50.196430 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-08-29 21:14:50.196437 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:14:50.196444 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:14:50.196452 | orchestrator 
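The two keyring tasks above copy Ceph client keyrings for each enabled RBD backend (here a single backend named rbd-1 on cluster ceph): the cinder-backup task explicitly lists ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring, while the cinder-volume task loops over the backend itself, presumably placing ceph.client.cinder.keyring. A rough sketch of deriving those expected file names from the backend list; the user-to-service mapping is read off the changed items above, everything else is illustrative:

    # Backend description as it appears in the loop items above.
    BACKENDS = [{"name": "rbd-1", "cluster": "ceph", "enabled": True}]

    # Ceph client users whose keyrings each service receives in this log.
    SERVICE_USERS = {
        "cinder-volume": ["cinder"],
        "cinder-backup": ["cinder", "cinder-backup"],
    }

    def expected_keyrings(backends, service_users=SERVICE_USERS):
        """Yield (service, keyring file name) pairs for every enabled backend."""
        for backend in backends:
            if not backend.get("enabled"):
                continue
            for service, users in service_users.items():
                for user in users:
                    # e.g. cluster "ceph" + user "cinder-backup"
                    #   -> ceph.client.cinder-backup.keyring
                    yield service, f"{backend['cluster']}.client.{user}.keyring"

    if __name__ == "__main__":
        for service, keyring in expected_keyrings(BACKENDS):
            print(f"{service}: {keyring}")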
| changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 21:14:50.196459 | orchestrator | 2025-08-29 21:14:50.196466 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-08-29 21:14:50.196473 | orchestrator | Friday 29 August 2025 21:12:27 +0000 (0:00:02.808) 0:00:48.883 ********* 2025-08-29 21:14:50.196480 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-08-29 21:14:50.196487 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-08-29 21:14:50.196494 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-08-29 21:14:50.196502 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-08-29 21:14:50.196512 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-08-29 21:14:50.196519 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-08-29 21:14:50.196527 | orchestrator | 2025-08-29 21:14:50.196534 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-08-29 21:14:50.196546 | orchestrator | Friday 29 August 2025 21:12:28 +0000 (0:00:01.019) 0:00:49.903 ********* 2025-08-29 21:14:50.196554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.196561 | orchestrator | 2025-08-29 21:14:50.196569 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-08-29 21:14:50.196576 | orchestrator | Friday 29 August 2025 21:12:28 +0000 (0:00:00.124) 0:00:50.027 ********* 2025-08-29 21:14:50.196583 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.196590 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.196598 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.196605 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.196613 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.196620 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.196628 | orchestrator | 2025-08-29 21:14:50.196635 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 21:14:50.196643 | orchestrator | Friday 29 August 2025 21:12:29 +0000 (0:00:01.031) 0:00:51.059 ********* 2025-08-29 21:14:50.196651 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:14:50.196660 | orchestrator | 2025-08-29 21:14:50.196667 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-08-29 21:14:50.196675 | orchestrator | Friday 29 August 2025 21:12:31 +0000 (0:00:01.753) 0:00:52.813 ********* 2025-08-29 21:14:50.196684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.196693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.196741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.196760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.196858 | orchestrator | 2025-08-29 21:14:50.196865 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-08-29 21:14:50.196872 | orchestrator | Friday 29 August 2025 21:12:34 +0000 (0:00:03.094) 0:00:55.907 ********* 2025-08-29 21:14:50.196884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.196895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.196912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196920 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.196927 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.196934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.196942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196949 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.196962 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196984 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.196992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.196998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.197010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197031 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.197037 | orchestrator | 2025-08-29 21:14:50.197043 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-08-29 21:14:50.197048 | orchestrator | Friday 29 August 2025 21:12:36 +0000 (0:00:02.155) 0:00:58.063 ********* 2025-08-29 21:14:50.197056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197068 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.197075 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197096 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.197107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197124 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.197132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197146 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.197153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197177 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.197186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197201 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.197208 | orchestrator | 2025-08-29 21:14:50.197215 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 21:14:50.197222 | orchestrator | Friday 29 August 2025 21:12:38 +0000 (0:00:01.571) 0:00:59.635 ********* 2025-08-29 21:14:50.197229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197252 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 
21:14:50.197349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197409 | orchestrator | 2025-08-29 21:14:50.197416 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 21:14:50.197424 | orchestrator | Friday 29 August 2025 21:12:40 +0000 (0:00:02.712) 0:01:02.348 ********* 2025-08-29 21:14:50.197431 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 21:14:50.197438 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.197446 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 21:14:50.197453 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.197460 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 21:14:50.197467 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.197475 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 21:14:50.197482 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 21:14:50.197492 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 21:14:50.197500 | orchestrator | 2025-08-29 21:14:50.197506 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 21:14:50.197514 | orchestrator | Friday 29 August 2025 21:12:42 +0000 (0:00:01.596) 0:01:03.944 ********* 2025-08-29 21:14:50.197525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197566 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.197584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197596 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.197639 | orchestrator | 2025-08-29 21:14:50.197646 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 21:14:50.197653 | orchestrator | Friday 29 August 2025 21:12:51 +0000 (0:00:09.176) 0:01:13.121 ********* 2025-08-29 21:14:50.197659 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.197666 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.197672 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.197678 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:14:50.197685 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:14:50.197697 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:14:50.197704 | orchestrator | 2025-08-29 21:14:50.197710 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 21:14:50.197801 | orchestrator | Friday 29 August 2025 21:12:53 +0000 (0:00:02.417) 0:01:15.538 ********* 2025-08-29 21:14:50.197812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197850 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.197856 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.197870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 21:14:50.197883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197890 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.197898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197913 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.197924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197947 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 21:14:50.197954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 21:14:50.197969 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.197975 | orchestrator | 2025-08-29 21:14:50.197982 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 21:14:50.197989 | orchestrator | Friday 29 August 2025 21:12:54 +0000 (0:00:00.847) 0:01:16.386 ********* 2025-08-29 21:14:50.197996 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.198003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.198009 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.198043 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.198052 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.198058 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.198065 | orchestrator | 2025-08-29 21:14:50.198071 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 21:14:50.198078 | orchestrator | Friday 29 August 2025 21:12:55 +0000 (0:00:00.660) 0:01:17.046 ********* 2025-08-29 21:14:50.198090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.198101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.198113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 21:14:50.198127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198138 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:14:50.198203 | orchestrator | 2025-08-29 21:14:50.198211 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 21:14:50.198218 | orchestrator | Friday 29 August 2025 21:12:57 +0000 (0:00:02.004) 0:01:19.050 ********* 2025-08-29 21:14:50.198225 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.198235 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:14:50.198242 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:14:50.198249 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:14:50.198256 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:14:50.198263 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:14:50.198270 | orchestrator | 2025-08-29 21:14:50.198277 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-08-29 21:14:50.198283 | orchestrator | Friday 29 August 2025 21:12:58 +0000 (0:00:00.592) 0:01:19.643 ********* 2025-08-29 21:14:50.198290 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:14:50.198297 | orchestrator | 2025-08-29 21:14:50.198303 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-08-29 21:14:50.198309 | orchestrator | Friday 29 August 2025 21:13:00 +0000 (0:00:02.106) 0:01:21.749 ********* 2025-08-29 21:14:50.198315 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:14:50.198321 | orchestrator | 2025-08-29 21:14:50.198328 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-08-29 21:14:50.198335 | orchestrator | Friday 29 August 2025 21:13:02 +0000 (0:00:02.287) 0:01:24.037 
********* 2025-08-29 21:14:50.198342 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:14:50.198349 | orchestrator | 2025-08-29 21:14:50.198356 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198363 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:19.207) 0:01:43.245 ********* 2025-08-29 21:14:50.198370 | orchestrator | 2025-08-29 21:14:50.198377 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198384 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.058) 0:01:43.303 ********* 2025-08-29 21:14:50.198391 | orchestrator | 2025-08-29 21:14:50.198398 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198405 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.060) 0:01:43.364 ********* 2025-08-29 21:14:50.198411 | orchestrator | 2025-08-29 21:14:50.198418 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198425 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.056) 0:01:43.421 ********* 2025-08-29 21:14:50.198432 | orchestrator | 2025-08-29 21:14:50.198439 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198446 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.059) 0:01:43.480 ********* 2025-08-29 21:14:50.198453 | orchestrator | 2025-08-29 21:14:50.198460 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 21:14:50.198467 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.058) 0:01:43.539 ********* 2025-08-29 21:14:50.198474 | orchestrator | 2025-08-29 21:14:50.198481 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-08-29 21:14:50.198488 | orchestrator | Friday 29 August 2025 21:13:21 +0000 (0:00:00.060) 0:01:43.599 ********* 2025-08-29 21:14:50.198495 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:14:50.198502 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:14:50.198509 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:14:50.198516 | orchestrator | 2025-08-29 21:14:50.198523 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-08-29 21:14:50.198535 | orchestrator | Friday 29 August 2025 21:13:51 +0000 (0:00:29.080) 0:02:12.680 ********* 2025-08-29 21:14:50.198542 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:14:50.198549 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:14:50.198556 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:14:50.198563 | orchestrator | 2025-08-29 21:14:50.198570 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-08-29 21:14:50.198577 | orchestrator | Friday 29 August 2025 21:13:59 +0000 (0:00:07.955) 0:02:20.635 ********* 2025-08-29 21:14:50.198584 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:14:50.198591 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:14:50.198598 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:14:50.198605 | orchestrator | 2025-08-29 21:14:50.198612 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-08-29 21:14:50.198619 | orchestrator | Friday 29 August 2025 21:14:38 +0000 (0:00:39.171) 
0:02:59.806 ********* 2025-08-29 21:14:50.198626 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:14:50.198633 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:14:50.198640 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:14:50.198647 | orchestrator | 2025-08-29 21:14:50.198654 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-08-29 21:14:50.198661 | orchestrator | Friday 29 August 2025 21:14:49 +0000 (0:00:11.114) 0:03:10.921 ********* 2025-08-29 21:14:50.198669 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:14:50.198676 | orchestrator | 2025-08-29 21:14:50.198683 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:14:50.198693 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 21:14:50.198701 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 21:14:50.198708 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 21:14:50.198715 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 21:14:50.198737 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 21:14:50.198747 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 21:14:50.198754 | orchestrator | 2025-08-29 21:14:50.198761 | orchestrator | 2025-08-29 21:14:50.198768 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:14:50.198775 | orchestrator | Friday 29 August 2025 21:14:49 +0000 (0:00:00.490) 0:03:11.411 ********* 2025-08-29 21:14:50.198782 | orchestrator | =============================================================================== 2025-08-29 21:14:50.198789 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 39.17s 2025-08-29 21:14:50.198796 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.08s 2025-08-29 21:14:50.198803 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.21s 2025-08-29 21:14:50.198810 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.11s 2025-08-29 21:14:50.198817 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.18s 2025-08-29 21:14:50.198824 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.96s 2025-08-29 21:14:50.198831 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.96s 2025-08-29 21:14:50.198838 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.73s 2025-08-29 21:14:50.198849 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.45s 2025-08-29 21:14:50.198856 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.56s 2025-08-29 21:14:50.198863 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.51s 2025-08-29 21:14:50.198870 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.46s 2025-08-29 21:14:50.198877 | orchestrator | 
cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.13s 2025-08-29 21:14:50.198884 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.09s 2025-08-29 21:14:50.198891 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.81s 2025-08-29 21:14:50.198898 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.75s 2025-08-29 21:14:50.198905 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.71s 2025-08-29 21:14:50.198912 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.42s 2025-08-29 21:14:50.198919 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.29s 2025-08-29 21:14:50.198926 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.21s 2025-08-29 21:14:50.198933 | orchestrator | 2025-08-29 21:14:50 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state STARTED 2025-08-29 21:14:50.198940 | orchestrator | 2025-08-29 21:14:50 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:50.198948 | orchestrator | 2025-08-29 21:14:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:53.239452 | orchestrator | 2025-08-29 21:14:53 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:53.240442 | orchestrator | 2025-08-29 21:14:53 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:14:53.240482 | orchestrator | 2025-08-29 21:14:53 | INFO  | Task 536d770f-e5b4-4ef0-a877-653d5fe08b40 is in state SUCCESS 2025-08-29 21:14:53.241198 | orchestrator | 2025-08-29 21:14:53.241229 | orchestrator | 2025-08-29 21:14:53.241241 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:14:53.241252 | orchestrator | 2025-08-29 21:14:53.241264 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:14:53.241275 | orchestrator | Friday 29 August 2025 21:13:58 +0000 (0:00:00.241) 0:00:00.241 ********* 2025-08-29 21:14:53.241286 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:14:53.241298 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:14:53.241309 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:14:53.241319 | orchestrator | 2025-08-29 21:14:53.241330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:14:53.241341 | orchestrator | Friday 29 August 2025 21:13:58 +0000 (0:00:00.260) 0:00:00.502 ********* 2025-08-29 21:14:53.241352 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 21:14:53.241363 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 21:14:53.241374 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 21:14:53.241385 | orchestrator | 2025-08-29 21:14:53.241396 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 21:14:53.241406 | orchestrator | 2025-08-29 21:14:53.241417 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 21:14:53.241428 | orchestrator | Friday 29 August 2025 21:13:59 +0000 (0:00:00.467) 0:00:00.969 ********* 2025-08-29 21:14:53.241438 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:14:53.241449 | orchestrator | 2025-08-29 21:14:53.241460 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 21:14:53.241471 | orchestrator | Friday 29 August 2025 21:14:00 +0000 (0:00:00.722) 0:00:01.692 ********* 2025-08-29 21:14:53.241513 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 21:14:53.241525 | orchestrator | 2025-08-29 21:14:53.241535 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 21:14:53.241546 | orchestrator | Friday 29 August 2025 21:14:04 +0000 (0:00:03.914) 0:00:05.606 ********* 2025-08-29 21:14:53.241570 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 21:14:53.241582 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 21:14:53.241593 | orchestrator | 2025-08-29 21:14:53.241604 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 21:14:53.241615 | orchestrator | Friday 29 August 2025 21:14:10 +0000 (0:00:06.513) 0:00:12.120 ********* 2025-08-29 21:14:53.241626 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:14:53.241636 | orchestrator | 2025-08-29 21:14:53.241647 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 21:14:53.241657 | orchestrator | Friday 29 August 2025 21:14:13 +0000 (0:00:03.312) 0:00:15.432 ********* 2025-08-29 21:14:53.241668 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:14:53.241679 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 21:14:53.241690 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 21:14:53.241700 | orchestrator | 2025-08-29 21:14:53.241711 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 21:14:53.241762 | orchestrator | Friday 29 August 2025 21:14:21 +0000 (0:00:07.730) 0:00:23.163 ********* 2025-08-29 21:14:53.241773 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:14:53.241784 | orchestrator | 2025-08-29 21:14:53.241796 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 21:14:53.241808 | orchestrator | Friday 29 August 2025 21:14:24 +0000 (0:00:03.273) 0:00:26.436 ********* 2025-08-29 21:14:53.241821 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 21:14:53.241833 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 21:14:53.241846 | orchestrator | 2025-08-29 21:14:53.241858 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-08-29 21:14:53.241870 | orchestrator | Friday 29 August 2025 21:14:32 +0000 (0:00:07.583) 0:00:34.020 ********* 2025-08-29 21:14:53.241882 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 21:14:53.241894 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-08-29 21:14:53.241906 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 21:14:53.241917 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 21:14:53.241930 | 
orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 21:14:53.241942 | orchestrator | 2025-08-29 21:14:53.241954 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 21:14:53.241966 | orchestrator | Friday 29 August 2025 21:14:48 +0000 (0:00:15.918) 0:00:49.938 ********* 2025-08-29 21:14:53.241978 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:14:53.241990 | orchestrator | 2025-08-29 21:14:53.242003 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 21:14:53.242058 | orchestrator | Friday 29 August 2025 21:14:48 +0000 (0:00:00.511) 0:00:50.450 ********* 2025-08-29 21:14:53.242074 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-08-29 21:14:53.242122 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1756502090.4298615-6627-48756498236565/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1756502090.4298615-6627-48756498236565/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1756502090.4298615-6627-48756498236565/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_7fstn9d5/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_7fstn9d5/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_7fstn9d5/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_7fstn9d5/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-08-29 21:14:53.242150 | orchestrator | 2025-08-29 21:14:53.242162 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:14:53.242173 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-08-29 21:14:53.242188 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:14:53.242208 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:14:53.242226 | orchestrator | 2025-08-29 21:14:53.242447 | orchestrator | 2025-08-29 21:14:53.242468 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:14:53.242497 | orchestrator | Friday 29 August 2025 21:14:52 +0000 (0:00:03.520) 0:00:53.971 ********* 2025-08-29 21:14:53.242519 | orchestrator | =============================================================================== 2025-08-29 21:14:53.242530 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.92s 2025-08-29 21:14:53.242541 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.73s 2025-08-29 21:14:53.242552 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.58s 2025-08-29 21:14:53.242563 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.51s 2025-08-29 21:14:53.242574 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.91s 2025-08-29 21:14:53.242585 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.52s 2025-08-29 21:14:53.242595 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.31s 2025-08-29 21:14:53.242606 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.27s 2025-08-29 21:14:53.242616 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.72s 2025-08-29 21:14:53.242627 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.51s 2025-08-29 21:14:53.242638 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-08-29 21:14:53.242648 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-08-29 21:14:53.242659 | orchestrator | 2025-08-29 21:14:53 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:53.242670 | 
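
The fatal task above reduces to the last line of the traceback: keystoneauth could not find an internal endpoint for the compute service in RegionOne, so openstack.cloud.compute_flavor failed before it could create the amphora flavor. The endpoints were evidently not registered yet while Nova was still being deployed in parallel (the "Waiting for Nova public port to be UP" play only completes later in this log). The sketch below is a hypothetical stand-alone diagnostic, not part of the playbooks: it forces the same catalog lookup and version discovery through openstacksdk and reports whether the compute endpoint is resolvable; the cloud name is assumed.

    # Hypothetical diagnostic (assumes a "testbed" entry in clouds.yaml):
    # touching conn.compute triggers the same catalog lookup / version
    # discovery that raised EndpointNotFound in the task above.
    import openstack
    from keystoneauth1.exceptions.catalog import EndpointNotFound

    conn = openstack.connect(cloud="testbed")
    try:
        flavors = list(conn.compute.flavors())
        print(f"compute endpoint found, {len(flavors)} flavors visible")
    except EndpointNotFound as exc:
        # Same condition as the failure above: no internal compute endpoint
        # registered in RegionOne (yet).
        print(f"compute endpoint missing from the service catalog: {exc}")

If the endpoint were actually registered, a catalog listing on the node would show it; here the play recap simply records the failure (failed=1 on testbed-node-0) and the job carries on with the remaining deployment tasks.
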
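
The "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines that follow come from the OSISM task watcher polling the remaining deployment tasks until each reports SUCCESS. A minimal sketch of a polling loop of that shape is shown below; get_state() is a hypothetical stand-in for the real task-status lookup.

    # Minimal sketch of a state-polling loop like the one producing the
    # "is in state STARTED ... Wait 1 second(s)" lines below.
    import time

    def wait_for_tasks(task_ids, get_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
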
orchestrator | 2025-08-29 21:14:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:56.285180 | orchestrator | 2025-08-29 21:14:56 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:56.288967 | orchestrator | 2025-08-29 21:14:56 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:14:56.296872 | orchestrator | 2025-08-29 21:14:56 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:56.296908 | orchestrator | 2025-08-29 21:14:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:14:59.345835 | orchestrator | 2025-08-29 21:14:59 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:14:59.345929 | orchestrator | 2025-08-29 21:14:59 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:14:59.349250 | orchestrator | 2025-08-29 21:14:59 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:14:59.350886 | orchestrator | 2025-08-29 21:14:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:02.387999 | orchestrator | 2025-08-29 21:15:02 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:02.390165 | orchestrator | 2025-08-29 21:15:02 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:02.392518 | orchestrator | 2025-08-29 21:15:02 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:02.392551 | orchestrator | 2025-08-29 21:15:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:05.444460 | orchestrator | 2025-08-29 21:15:05 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:05.444538 | orchestrator | 2025-08-29 21:15:05 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:05.444771 | orchestrator | 2025-08-29 21:15:05 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:05.444794 | orchestrator | 2025-08-29 21:15:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:08.475559 | orchestrator | 2025-08-29 21:15:08 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:08.477507 | orchestrator | 2025-08-29 21:15:08 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:08.479221 | orchestrator | 2025-08-29 21:15:08 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:08.479260 | orchestrator | 2025-08-29 21:15:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:11.505367 | orchestrator | 2025-08-29 21:15:11 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:11.507397 | orchestrator | 2025-08-29 21:15:11 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:11.509982 | orchestrator | 2025-08-29 21:15:11 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:11.510108 | orchestrator | 2025-08-29 21:15:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:14.542357 | orchestrator | 2025-08-29 21:15:14 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:14.545280 | orchestrator | 2025-08-29 21:15:14 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:14.547593 | orchestrator | 2025-08-29 21:15:14 | INFO  | Task 
499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:14.547631 | orchestrator | 2025-08-29 21:15:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:17.576305 | orchestrator | 2025-08-29 21:15:17 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:17.576971 | orchestrator | 2025-08-29 21:15:17 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:17.577616 | orchestrator | 2025-08-29 21:15:17 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:17.577632 | orchestrator | 2025-08-29 21:15:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:20.618762 | orchestrator | 2025-08-29 21:15:20 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:20.620599 | orchestrator | 2025-08-29 21:15:20 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:20.623176 | orchestrator | 2025-08-29 21:15:20 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:20.623207 | orchestrator | 2025-08-29 21:15:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:23.664507 | orchestrator | 2025-08-29 21:15:23 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:23.665508 | orchestrator | 2025-08-29 21:15:23 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:23.667515 | orchestrator | 2025-08-29 21:15:23 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:23.667539 | orchestrator | 2025-08-29 21:15:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:26.710875 | orchestrator | 2025-08-29 21:15:26 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:26.711957 | orchestrator | 2025-08-29 21:15:26 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:26.713837 | orchestrator | 2025-08-29 21:15:26 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:26.713946 | orchestrator | 2025-08-29 21:15:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:29.759204 | orchestrator | 2025-08-29 21:15:29 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:29.760611 | orchestrator | 2025-08-29 21:15:29 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:29.762227 | orchestrator | 2025-08-29 21:15:29 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:29.762260 | orchestrator | 2025-08-29 21:15:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:32.805078 | orchestrator | 2025-08-29 21:15:32 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:32.806717 | orchestrator | 2025-08-29 21:15:32 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:32.809890 | orchestrator | 2025-08-29 21:15:32 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:32.809969 | orchestrator | 2025-08-29 21:15:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:35.850884 | orchestrator | 2025-08-29 21:15:35 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:35.852077 | orchestrator | 2025-08-29 21:15:35 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state 
STARTED 2025-08-29 21:15:35.853916 | orchestrator | 2025-08-29 21:15:35 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:35.853955 | orchestrator | 2025-08-29 21:15:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:38.893938 | orchestrator | 2025-08-29 21:15:38 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:38.895575 | orchestrator | 2025-08-29 21:15:38 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:38.897177 | orchestrator | 2025-08-29 21:15:38 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:38.897361 | orchestrator | 2025-08-29 21:15:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:41.925622 | orchestrator | 2025-08-29 21:15:41 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:41.926901 | orchestrator | 2025-08-29 21:15:41 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:41.927803 | orchestrator | 2025-08-29 21:15:41 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:41.927890 | orchestrator | 2025-08-29 21:15:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:44.967010 | orchestrator | 2025-08-29 21:15:44 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:44.967806 | orchestrator | 2025-08-29 21:15:44 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:44.969391 | orchestrator | 2025-08-29 21:15:44 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:44.969427 | orchestrator | 2025-08-29 21:15:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:48.006069 | orchestrator | 2025-08-29 21:15:48 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:48.006612 | orchestrator | 2025-08-29 21:15:48 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:48.008225 | orchestrator | 2025-08-29 21:15:48 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:48.008494 | orchestrator | 2025-08-29 21:15:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:51.055542 | orchestrator | 2025-08-29 21:15:51 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:51.057482 | orchestrator | 2025-08-29 21:15:51 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:51.059869 | orchestrator | 2025-08-29 21:15:51 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:51.060161 | orchestrator | 2025-08-29 21:15:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:54.107083 | orchestrator | 2025-08-29 21:15:54 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:54.108769 | orchestrator | 2025-08-29 21:15:54 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:54.110220 | orchestrator | 2025-08-29 21:15:54 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:54.110475 | orchestrator | 2025-08-29 21:15:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:15:57.143438 | orchestrator | 2025-08-29 21:15:57 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:15:57.143527 | orchestrator 
| 2025-08-29 21:15:57 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:15:57.144354 | orchestrator | 2025-08-29 21:15:57 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:15:57.144380 | orchestrator | 2025-08-29 21:15:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:00.183831 | orchestrator | 2025-08-29 21:16:00 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:00.187162 | orchestrator | 2025-08-29 21:16:00 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:00.188754 | orchestrator | 2025-08-29 21:16:00 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:00.189022 | orchestrator | 2025-08-29 21:16:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:03.228451 | orchestrator | 2025-08-29 21:16:03 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:03.230289 | orchestrator | 2025-08-29 21:16:03 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:03.232956 | orchestrator | 2025-08-29 21:16:03 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:03.233256 | orchestrator | 2025-08-29 21:16:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:06.280328 | orchestrator | 2025-08-29 21:16:06 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:06.282251 | orchestrator | 2025-08-29 21:16:06 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:06.284388 | orchestrator | 2025-08-29 21:16:06 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:06.284416 | orchestrator | 2025-08-29 21:16:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:09.325356 | orchestrator | 2025-08-29 21:16:09 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:09.326835 | orchestrator | 2025-08-29 21:16:09 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:09.328483 | orchestrator | 2025-08-29 21:16:09 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:09.328656 | orchestrator | 2025-08-29 21:16:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:12.372080 | orchestrator | 2025-08-29 21:16:12 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:12.373846 | orchestrator | 2025-08-29 21:16:12 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:12.375974 | orchestrator | 2025-08-29 21:16:12 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:12.376101 | orchestrator | 2025-08-29 21:16:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:15.420207 | orchestrator | 2025-08-29 21:16:15 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:15.421365 | orchestrator | 2025-08-29 21:16:15 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:15.423773 | orchestrator | 2025-08-29 21:16:15 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:15.423810 | orchestrator | 2025-08-29 21:16:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:18.468141 | orchestrator | 2025-08-29 21:16:18 | INFO  | Task 
fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:18.469280 | orchestrator | 2025-08-29 21:16:18 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:18.471929 | orchestrator | 2025-08-29 21:16:18 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:18.471980 | orchestrator | 2025-08-29 21:16:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:21.510942 | orchestrator | 2025-08-29 21:16:21 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:21.512720 | orchestrator | 2025-08-29 21:16:21 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:21.515218 | orchestrator | 2025-08-29 21:16:21 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:21.515708 | orchestrator | 2025-08-29 21:16:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:24.557990 | orchestrator | 2025-08-29 21:16:24 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state STARTED 2025-08-29 21:16:24.559572 | orchestrator | 2025-08-29 21:16:24 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:24.561225 | orchestrator | 2025-08-29 21:16:24 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:24.561643 | orchestrator | 2025-08-29 21:16:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:27.611872 | orchestrator | 2025-08-29 21:16:27 | INFO  | Task fe85495e-5d78-4a18-b4e5-33baae134617 is in state SUCCESS 2025-08-29 21:16:27.615205 | orchestrator | 2025-08-29 21:16:27 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:27.617214 | orchestrator | 2025-08-29 21:16:27 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:27.617242 | orchestrator | 2025-08-29 21:16:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:30.649834 | orchestrator | 2025-08-29 21:16:30 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:30.652040 | orchestrator | 2025-08-29 21:16:30 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:30.652442 | orchestrator | 2025-08-29 21:16:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:33.689812 | orchestrator | 2025-08-29 21:16:33 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:33.691274 | orchestrator | 2025-08-29 21:16:33 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:33.691320 | orchestrator | 2025-08-29 21:16:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:36.733962 | orchestrator | 2025-08-29 21:16:36 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:36.735382 | orchestrator | 2025-08-29 21:16:36 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:36.735724 | orchestrator | 2025-08-29 21:16:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:39.764246 | orchestrator | 2025-08-29 21:16:39 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:39.765463 | orchestrator | 2025-08-29 21:16:39 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:39.765496 | orchestrator | 2025-08-29 21:16:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 
21:16:42.795873 | orchestrator | 2025-08-29 21:16:42 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:42.796929 | orchestrator | 2025-08-29 21:16:42 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:42.797423 | orchestrator | 2025-08-29 21:16:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:45.839929 | orchestrator | 2025-08-29 21:16:45 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:45.841467 | orchestrator | 2025-08-29 21:16:45 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:45.841511 | orchestrator | 2025-08-29 21:16:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:48.882418 | orchestrator | 2025-08-29 21:16:48 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:48.883341 | orchestrator | 2025-08-29 21:16:48 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:48.883746 | orchestrator | 2025-08-29 21:16:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:51.922091 | orchestrator | 2025-08-29 21:16:51 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:51.924054 | orchestrator | 2025-08-29 21:16:51 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:51.924108 | orchestrator | 2025-08-29 21:16:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:54.980656 | orchestrator | 2025-08-29 21:16:54 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:54.982497 | orchestrator | 2025-08-29 21:16:54 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:54.983087 | orchestrator | 2025-08-29 21:16:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:16:58.048470 | orchestrator | 2025-08-29 21:16:58 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:16:58.056197 | orchestrator | 2025-08-29 21:16:58 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:16:58.056228 | orchestrator | 2025-08-29 21:16:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:01.098276 | orchestrator | 2025-08-29 21:17:01 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:17:01.098385 | orchestrator | 2025-08-29 21:17:01 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:01.099672 | orchestrator | 2025-08-29 21:17:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:04.140780 | orchestrator | 2025-08-29 21:17:04 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state STARTED 2025-08-29 21:17:04.142343 | orchestrator | 2025-08-29 21:17:04 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:04.142427 | orchestrator | 2025-08-29 21:17:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:07.183490 | orchestrator | 2025-08-29 21:17:07 | INFO  | Task 8d61b7b5-ce69-4cf2-83da-5a48aed6ace9 is in state SUCCESS 2025-08-29 21:17:07.184794 | orchestrator | 2025-08-29 21:17:07.184949 | orchestrator | 2025-08-29 21:17:07.184965 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:17:07.184978 | orchestrator | 2025-08-29 21:17:07.184989 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-08-29 21:17:07.185000 | orchestrator | Friday 29 August 2025 21:13:33 +0000 (0:00:00.133) 0:00:00.133 ********* 2025-08-29 21:17:07.185011 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:17:07.185023 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:17:07.185034 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:17:07.185045 | orchestrator | 2025-08-29 21:17:07.185056 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:17:07.185067 | orchestrator | Friday 29 August 2025 21:13:33 +0000 (0:00:00.234) 0:00:00.368 ********* 2025-08-29 21:17:07.185078 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 21:17:07.185089 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 21:17:07.185100 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 21:17:07.185110 | orchestrator | 2025-08-29 21:17:07.185121 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-08-29 21:17:07.185132 | orchestrator | 2025-08-29 21:17:07.185143 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-08-29 21:17:07.185154 | orchestrator | Friday 29 August 2025 21:13:34 +0000 (0:00:00.543) 0:00:00.911 ********* 2025-08-29 21:17:07.185164 | orchestrator | 2025-08-29 21:17:07.185175 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-08-29 21:17:07.185185 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:17:07.185196 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:17:07.185207 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:17:07.185217 | orchestrator | 2025-08-29 21:17:07.185228 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:17:07.185239 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:17:07.185254 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:17:07.185267 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:17:07.185279 | orchestrator | 2025-08-29 21:17:07.185293 | orchestrator | 2025-08-29 21:17:07.185305 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:17:07.185318 | orchestrator | Friday 29 August 2025 21:16:25 +0000 (0:02:51.837) 0:02:52.749 ********* 2025-08-29 21:17:07.185331 | orchestrator | =============================================================================== 2025-08-29 21:17:07.185343 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 171.84s 2025-08-29 21:17:07.185356 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-08-29 21:17:07.185368 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2025-08-29 21:17:07.185381 | orchestrator | 2025-08-29 21:17:07.185394 | orchestrator | 2025-08-29 21:17:07.185407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:17:07.185418 | orchestrator | 2025-08-29 21:17:07.185428 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:17:07.185439 | orchestrator | Friday 29 
August 2025 21:14:53 +0000 (0:00:00.264) 0:00:00.264 ********* 2025-08-29 21:17:07.185450 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:17:07.185461 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:17:07.185471 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:17:07.185482 | orchestrator | 2025-08-29 21:17:07.185520 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:17:07.185567 | orchestrator | Friday 29 August 2025 21:14:54 +0000 (0:00:00.344) 0:00:00.608 ********* 2025-08-29 21:17:07.185580 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-08-29 21:17:07.185591 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-08-29 21:17:07.185602 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-08-29 21:17:07.185612 | orchestrator | 2025-08-29 21:17:07.185623 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-08-29 21:17:07.185634 | orchestrator | 2025-08-29 21:17:07.185644 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 21:17:07.185655 | orchestrator | Friday 29 August 2025 21:14:54 +0000 (0:00:00.397) 0:00:01.006 ********* 2025-08-29 21:17:07.185666 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:17:07.185677 | orchestrator | 2025-08-29 21:17:07.185688 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-08-29 21:17:07.185698 | orchestrator | Friday 29 August 2025 21:14:55 +0000 (0:00:00.519) 0:00:01.526 ********* 2025-08-29 21:17:07.185713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185770 | orchestrator | 2025-08-29 21:17:07.185781 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-08-29 21:17:07.185792 | orchestrator | Friday 29 August 2025 21:14:56 +0000 (0:00:00.817) 0:00:02.344 ********* 2025-08-29 21:17:07.185803 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-08-29 21:17:07.185814 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-08-29 21:17:07.185825 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:17:07.185836 | orchestrator | 2025-08-29 21:17:07.185847 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 21:17:07.185866 | orchestrator | Friday 29 August 2025 21:14:56 +0000 (0:00:00.804) 0:00:03.149 ********* 2025-08-29 21:17:07.185877 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:17:07.185888 | orchestrator | 2025-08-29 21:17:07.185898 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-08-29 21:17:07.185909 | orchestrator | Friday 29 August 2025 21:14:57 +0000 (0:00:00.637) 0:00:03.786 ********* 2025-08-29 21:17:07.185926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.185969 | orchestrator | 2025-08-29 21:17:07.185980 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-08-29 21:17:07.185990 | orchestrator | Friday 29 August 2025 21:14:58 +0000 (0:00:01.381) 0:00:05.168 ********* 2025-08-29 21:17:07.186002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186094 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:17:07.186106 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:17:07.186117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186129 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:17:07.186140 | orchestrator | 2025-08-29 21:17:07.186151 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-08-29 21:17:07.186162 | orchestrator | Friday 29 August 2025 21:14:59 +0000 (0:00:00.342) 0:00:05.511 ********* 2025-08-29 21:17:07.186178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186202 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:17:07.186214 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:17:07.186233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 21:17:07.186245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:17:07.186256 | orchestrator | 2025-08-29 21:17:07.186267 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-08-29 21:17:07.186278 | orchestrator | Friday 29 August 2025 21:14:59 +0000 (0:00:00.769) 0:00:06.280 ********* 2025-08-29 21:17:07.186289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186336 | orchestrator | 2025-08-29 21:17:07.186347 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-08-29 21:17:07.186357 | orchestrator | Friday 29 August 2025 21:15:01 +0000 (0:00:01.277) 0:00:07.557 ********* 2025-08-29 21:17:07.186368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 21:17:07.186417 | orchestrator | 2025-08-29 21:17:07.186428 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-08-29 21:17:07.186439 | 
orchestrator | Friday 29 August 2025 21:15:02 +0000 (0:00:01.445) 0:00:09.003 *********
2025-08-29 21:17:07.186450 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:17:07.186461 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:17:07.186472 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:17:07.186483 | orchestrator |
2025-08-29 21:17:07.186494 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 21:17:07.186505 | orchestrator | Friday 29 August 2025 21:15:03 +0000 (0:00:00.449) 0:00:09.452 *********
2025-08-29 21:17:07.186515 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 21:17:07.186526 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 21:17:07.186560 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 21:17:07.186572 | orchestrator |
2025-08-29 21:17:07.186582 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 21:17:07.186593 | orchestrator | Friday 29 August 2025 21:15:04 +0000 (0:00:01.311) 0:00:10.763 *********
2025-08-29 21:17:07.186604 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 21:17:07.186615 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 21:17:07.186626 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 21:17:07.186637 | orchestrator |
2025-08-29 21:17:07.186648 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 21:17:07.186659 | orchestrator | Friday 29 August 2025 21:15:05 +0000 (0:00:01.235) 0:00:11.999 *********
2025-08-29 21:17:07.186697 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 21:17:07.186709 | orchestrator |
2025-08-29 21:17:07.186720 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 21:17:07.186735 | orchestrator | Friday 29 August 2025 21:15:06 +0000 (0:00:00.669) 0:00:12.668 *********
2025-08-29 21:17:07.186747 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 21:17:07.186757 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 21:17:07.186768 | orchestrator | ok: [testbed-node-0]
2025-08-29 21:17:07.186779 | orchestrator | ok: [testbed-node-1]
2025-08-29 21:17:07.186790 | orchestrator | ok: [testbed-node-2]
2025-08-29 21:17:07.186801 | orchestrator |
2025-08-29 21:17:07.186812 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 21:17:07.186822 | orchestrator | Friday 29 August 2025 21:15:07 +0000 (0:00:00.649) 0:00:13.318 *********
2025-08-29 21:17:07.186833 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:17:07.186844 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:17:07.186855 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:17:07.186866 | orchestrator |
2025-08-29 21:17:07.186876 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 21:17:07.186887 | orchestrator | Friday 29 August 2025 21:15:07
+0000 (0:00:00.411) 0:00:13.729 ********* 2025-08-29 21:17:07.186899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1069109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2287238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1069109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2287238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1069109, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2287238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1069197, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.240902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1069197, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.240902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186978 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1069197, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.240902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.186990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1069128, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2312374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1069128, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2312374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1069128, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2312374, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1069200, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.242724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1069200, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.242724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1069200, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.242724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1069161, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2358544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1069161, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2358544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1069161, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2358544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1069183, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1756499186.239767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1069183, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.239767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1069183, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.239767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1069106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2261055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1069106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2261055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1069106, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2261055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-08-29 21:17:07.187378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1069118, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.229447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1069118, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.229447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1069118, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.229447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1069134, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2315345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1069134, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2315345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1069134, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2315345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1069173, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2378535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1069173, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2378535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1069173, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2378535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1069192, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2404485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 12997, 'inode': 1069192, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2404485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1069192, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2404485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1069122, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2302132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1069122, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2302132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1069122, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2302132, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1069180, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.238796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1069180, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.238796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1069180, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.238796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1069164, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2368805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1069164, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2368805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1069164, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2368805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187690 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1069153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.235334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1069153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.235334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1069153, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.235334, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1069150, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.23383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1069150, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.23383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1069150, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.23383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1069175, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2384305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1069175, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2384305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1069137, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.232983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1069175, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2384305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1069137, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.232983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1069189, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2401671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1069137, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.232983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1069189, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2401671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1070121, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3939273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1070121, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3939273, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1069189, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2401671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1069247, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2547493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1069247, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2547493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1070121, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3939273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.187992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1069230, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1069230, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1069247, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2547493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1069303, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2581933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1069303, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2581933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1069230, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188078 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1069215, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2439315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1069215, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2439315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1069303, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2581933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1070092, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3847811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1070092, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3847811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188455 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1069215, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2439315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1069306, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2635381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1069306, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2635381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1070092, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3847811, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1070096, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188566 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1070096, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1069306, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2635381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1070114, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3917265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1070114, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3917265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1070096, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1069344, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3827262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1069344, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3827262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1070114, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3917265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1069295, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2575696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1069295, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2575696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1069344, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3827262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1069244, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.248744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1069244, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.248744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1069295, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2575696, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1069276, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2570639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1069276, 'dev': 114, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2570639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1069233, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.247361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1069244, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.248744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1069233, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.247361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1069300, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2577915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1069300, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2577915, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1069276, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2570639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1070108, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.390326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1070108, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.390326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1069233, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.247361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1070099, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3877263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1070099, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3877263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1069300, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2577915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1069218, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.24441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.188995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1069218, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.24441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1070108, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.390326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-08-29 21:17:07.189034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1069224, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1069224, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1070099, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3877263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1069336, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2640493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1069336, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2640493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
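Each loop item in this "Copying over custom dashboards" task is the raw file-stat record Ansible reports for one copied dashboard (path, mode, owner, size, timestamps); the copy loop continues below. As a rough illustration only, and not the kolla-ansible grafana role itself (which, per the task list further down, uses find and copy tasks), records of this shape can be produced with a few lines of Python. The dashboard root is the path seen in the log; everything else here is an assumption.

```python
#!/usr/bin/env python3
# Illustration only: build per-file records similar to the loop items in the
# log above. The dashboard root is the path seen in the log; adjust it when
# running elsewhere. Unix-only (uses pwd/grp for owner/group names).
import grp
import pwd
import stat
from pathlib import Path

DASHBOARD_ROOT = Path("/operations/grafana/dashboards")

def describe(path: Path) -> dict:
    """Return a subset of the fields Ansible reports for each dashboard file."""
    st = path.stat()
    return {
        "path": str(path),
        "mode": format(stat.S_IMODE(st.st_mode), "04o"),   # e.g. "0644"
        "uid": st.st_uid,
        "gid": st.st_gid,
        "pw_name": pwd.getpwuid(st.st_uid).pw_name,
        "gr_name": grp.getgrgid(st.st_gid).gr_name,
        "size": st.st_size,
        "mtime": st.st_mtime,
        "isreg": stat.S_ISREG(st.st_mode),
    }

if __name__ == "__main__":
    for dashboard in sorted(DASHBOARD_ROOT.rglob("*.json")):
        # Keys in the log are relative paths such as "ceph/host-details.json".
        print(dashboard.relative_to(DASHBOARD_ROOT), describe(dashboard))
```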
2025-08-29 21:17:07.189113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1069218, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.24441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1070098, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1070098, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1069224, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.245539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1069336, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.2640493, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 21:17:07.189188 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1070098, 'dev': 114, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756499186.3857262, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 21:17:07.189199 | orchestrator |
2025-08-29 21:17:07.189211 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-08-29 21:17:07.189222 | orchestrator | Friday 29 August 2025 21:15:47 +0000 (0:00:39.652) 0:00:53.381 *********
2025-08-29 21:17:07.189240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 21:17:07.189259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 21:17:07.189270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 21:17:07.189282 | orchestrator |
2025-08-29 21:17:07.189293 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-08-29 21:17:07.189303 | orchestrator | Friday 29 August 2025 21:15:48 +0000 (0:00:01.186) 0:00:54.567 *********
2025-08-29 21:17:07.189314 | orchestrator | changed: [testbed-node-0]
2025-08-29 21:17:07.189326 | orchestrator |
2025-08-29 21:17:07.189337 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-08-29 21:17:07.189347 | orchestrator | Friday 29 August 2025 21:15:50 +0000 (0:00:02.342) 0:00:56.910 *********
2025-08-29 21:17:07.189358 | orchestrator | changed: [testbed-node-0]
2025-08-29 21:17:07.189369 | orchestrator |
2025-08-29 21:17:07.189380 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 21:17:07.189390 | orchestrator | Friday 29 August 2025 21:15:52 +0000 (0:00:02.254) 0:00:59.165 *********
2025-08-29 21:17:07.189401 | orchestrator |
2025-08-29 21:17:07.189412 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 21:17:07.189422 | orchestrator | Friday 29 August 2025 21:15:53 +0000 (0:00:00.225) 0:00:59.391 *********
2025-08-29 21:17:07.189433 | orchestrator |
2025-08-29 21:17:07.189448 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 21:17:07.189460 | orchestrator | Friday 29 August 2025 21:15:53 +0000 (0:00:00.067) 0:00:59.458 *********
2025-08-29 21:17:07.189471 | orchestrator |
2025-08-29 21:17:07.189482 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-08-29 21:17:07.189492 | orchestrator | Friday 29 August 2025 21:15:53 +0000 (0:00:00.064) 0:00:59.523 *********
2025-08-29 21:17:07.189503 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:17:07.189514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:17:07.189525 | orchestrator | changed: [testbed-node-0]
2025-08-29 21:17:07.189553 | orchestrator |
2025-08-29 21:17:07.189564 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-08-29 21:17:07.189575 | orchestrator | Friday 29 August 2025 21:16:00 +0000 (0:00:06.872) 0:01:06.396 *********
2025-08-29 21:17:07.189586 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:17:07.189597 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:17:07.189608 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-08-29 21:17:07.189625 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
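The FAILED - RETRYING messages above come from a wait-until-healthy pattern: after restarting the first Grafana container, the handler polls the service a fixed number of times with a pause between attempts, and reports ok below once Grafana answers. A minimal sketch of the same idea, assuming a hypothetical health URL, retry count and delay rather than the role's actual values:

```python
#!/usr/bin/env python3
# Minimal sketch of a wait-until-healthy loop like the one behind the
# "FAILED - RETRYING ... (N retries left)" messages. The URL, retry count and
# delay are assumptions for illustration, not values taken from the deployed
# kolla-ansible configuration.
import time
import urllib.error
import urllib.request

GRAFANA_URL = "https://api-int.testbed.osism.xyz:3000/login"  # assumed endpoint
RETRIES = 12        # the log counts down from "12 retries left"
DELAY_SECONDS = 10  # assumed pause between attempts

def wait_for_grafana(url: str = GRAFANA_URL) -> bool:
    """Poll the URL until it answers with HTTP 200 or the retries run out."""
    for attempt in range(1, RETRIES + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not reachable yet; fall through and retry
        print(f"FAILED - RETRYING: waiting for grafana ({RETRIES - attempt} retries left).")
        time.sleep(DELAY_SECONDS)
    return False

if __name__ == "__main__":
    print("grafana is up" if wait_for_grafana() else "grafana did not become ready")
```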
2025-08-29 21:17:07.189637 | orchestrator | ok: [testbed-node-0]
2025-08-29 21:17:07.189648 | orchestrator |
2025-08-29 21:17:07.189659 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-08-29 21:17:07.189670 | orchestrator | Friday 29 August 2025 21:16:26 +0000 (0:00:26.809) 0:01:33.205 *********
2025-08-29 21:17:07.189681 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:17:07.189692 | orchestrator | changed: [testbed-node-1]
2025-08-29 21:17:07.189702 | orchestrator | changed: [testbed-node-2]
2025-08-29 21:17:07.189713 | orchestrator |
2025-08-29 21:17:07.189724 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-08-29 21:17:07.189735 | orchestrator | Friday 29 August 2025 21:17:00 +0000 (0:00:33.340) 0:02:06.546 *********
2025-08-29 21:17:07.189745 | orchestrator | ok: [testbed-node-0]
2025-08-29 21:17:07.189756 | orchestrator |
2025-08-29 21:17:07.189767 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-08-29 21:17:07.189778 | orchestrator | Friday 29 August 2025 21:17:02 +0000 (0:00:02.138) 0:02:08.684 *********
2025-08-29 21:17:07.189789 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:17:07.189806 | orchestrator | skipping: [testbed-node-1]
2025-08-29 21:17:07.189817 | orchestrator | skipping: [testbed-node-2]
2025-08-29 21:17:07.189828 | orchestrator |
2025-08-29 21:17:07.189839 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-08-29 21:17:07.189850 | orchestrator | Friday 29 August 2025 21:17:02 +0000 (0:00:00.556) 0:02:09.240 *********
2025-08-29 21:17:07.189862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-08-29 21:17:07.189874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-08-29 21:17:07.189886 | orchestrator |
2025-08-29 21:17:07.189897 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-08-29 21:17:07.189908 | orchestrator | Friday 29 August 2025 21:17:05 +0000 (0:00:02.531) 0:02:11.772 *********
2025-08-29 21:17:07.189918 | orchestrator | skipping: [testbed-node-0]
2025-08-29 21:17:07.189929 | orchestrator |
2025-08-29 21:17:07.189940 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 21:17:07.189951 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 21:17:07.189963 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 21:17:07.189974 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 21:17:07.189985 | orchestrator |
2025-08-29 21:17:07.189996 | orchestrator |
2025-08-29 21:17:07.190007 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 21:17:07.190046 | orchestrator | Friday 29 August 2025 21:17:05 +0000 (0:00:00.350) 0:02:12.122 *********
2025-08-29 21:17:07.190060 | orchestrator | ===============================================================================
2025-08-29 21:17:07.190071 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.65s
2025-08-29 21:17:07.190082 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.34s
2025-08-29 21:17:07.190100 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.81s
2025-08-29 21:17:07.190111 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.87s
2025-08-29 21:17:07.190121 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.53s
2025-08-29 21:17:07.190132 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.34s
2025-08-29 21:17:07.190143 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s
2025-08-29 21:17:07.190153 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.14s
2025-08-29 21:17:07.190165 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2025-08-29 21:17:07.190210 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s
2025-08-29 21:17:07.190222 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.31s
2025-08-29 21:17:07.190233 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.28s
2025-08-29 21:17:07.190243 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.24s
2025-08-29 21:17:07.190254 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.19s
2025-08-29 21:17:07.190265 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.82s
2025-08-29 21:17:07.190275 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.80s
2025-08-29 21:17:07.190286 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.77s
2025-08-29 21:17:07.190297 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.67s
2025-08-29 21:17:07.190307 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s
2025-08-29 21:17:07.190318 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s
2025-08-29 21:17:07.190329 | orchestrator | 2025-08-29 21:17:07 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:17:07.190340 | orchestrator | 2025-08-29 21:17:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:17:10.228251 | orchestrator | 2025-08-29 21:17:10 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:17:10.228356 | orchestrator | 2025-08-29 21:17:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 21:17:13.260965 | orchestrator | 2025-08-29 21:17:13 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED
2025-08-29 21:17:13.261057 | orchestrator | 2025-08-29 21:17:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29
21:17:16.296492 | orchestrator | 2025-08-29 21:17:16 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:16.296682 | orchestrator | 2025-08-29 21:17:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:19.328783 | orchestrator | 2025-08-29 21:17:19 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:19.328893 | orchestrator | 2025-08-29 21:17:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:22.361012 | orchestrator | 2025-08-29 21:17:22 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:22.361108 | orchestrator | 2025-08-29 21:17:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:25.402658 | orchestrator | 2025-08-29 21:17:25 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:25.402756 | orchestrator | 2025-08-29 21:17:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:28.434652 | orchestrator | 2025-08-29 21:17:28 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:28.434761 | orchestrator | 2025-08-29 21:17:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:31.466575 | orchestrator | 2025-08-29 21:17:31 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:31.466672 | orchestrator | 2025-08-29 21:17:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:34.506995 | orchestrator | 2025-08-29 21:17:34 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:34.507096 | orchestrator | 2025-08-29 21:17:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:37.548917 | orchestrator | 2025-08-29 21:17:37 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:37.549026 | orchestrator | 2025-08-29 21:17:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:40.594550 | orchestrator | 2025-08-29 21:17:40 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:40.594651 | orchestrator | 2025-08-29 21:17:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:43.645591 | orchestrator | 2025-08-29 21:17:43 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:43.645701 | orchestrator | 2025-08-29 21:17:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:46.685128 | orchestrator | 2025-08-29 21:17:46 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:46.685228 | orchestrator | 2025-08-29 21:17:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:49.722538 | orchestrator | 2025-08-29 21:17:49 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:49.722651 | orchestrator | 2025-08-29 21:17:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:52.756457 | orchestrator | 2025-08-29 21:17:52 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:52.756597 | orchestrator | 2025-08-29 21:17:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:55.802935 | orchestrator | 2025-08-29 21:17:55 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:55.803025 | orchestrator | 2025-08-29 21:17:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:17:58.853662 | orchestrator | 2025-08-29 21:17:58 | INFO  | Task 
499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:17:58.853769 | orchestrator | 2025-08-29 21:17:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:01.899251 | orchestrator | 2025-08-29 21:18:01 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:01.899357 | orchestrator | 2025-08-29 21:18:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:04.942856 | orchestrator | 2025-08-29 21:18:04 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:04.942984 | orchestrator | 2025-08-29 21:18:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:07.982770 | orchestrator | 2025-08-29 21:18:07 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:07.982878 | orchestrator | 2025-08-29 21:18:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:11.016569 | orchestrator | 2025-08-29 21:18:11 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:11.016673 | orchestrator | 2025-08-29 21:18:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:14.056298 | orchestrator | 2025-08-29 21:18:14 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:14.056411 | orchestrator | 2025-08-29 21:18:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:17.101341 | orchestrator | 2025-08-29 21:18:17 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:17.101441 | orchestrator | 2025-08-29 21:18:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:20.144190 | orchestrator | 2025-08-29 21:18:20 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:20.144310 | orchestrator | 2025-08-29 21:18:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:23.189881 | orchestrator | 2025-08-29 21:18:23 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:23.189979 | orchestrator | 2025-08-29 21:18:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:26.232024 | orchestrator | 2025-08-29 21:18:26 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:26.232151 | orchestrator | 2025-08-29 21:18:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:29.274713 | orchestrator | 2025-08-29 21:18:29 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:29.274812 | orchestrator | 2025-08-29 21:18:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:32.315001 | orchestrator | 2025-08-29 21:18:32 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:32.315111 | orchestrator | 2025-08-29 21:18:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:35.364800 | orchestrator | 2025-08-29 21:18:35 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:35.365089 | orchestrator | 2025-08-29 21:18:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:38.402326 | orchestrator | 2025-08-29 21:18:38 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:38.402395 | orchestrator | 2025-08-29 21:18:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:41.442134 | orchestrator | 2025-08-29 21:18:41 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 
21:18:41.442227 | orchestrator | 2025-08-29 21:18:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:44.480928 | orchestrator | 2025-08-29 21:18:44 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:44.481046 | orchestrator | 2025-08-29 21:18:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:47.529674 | orchestrator | 2025-08-29 21:18:47 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:47.529821 | orchestrator | 2025-08-29 21:18:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:50.564671 | orchestrator | 2025-08-29 21:18:50 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:50.564781 | orchestrator | 2025-08-29 21:18:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:53.613331 | orchestrator | 2025-08-29 21:18:53 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:53.613429 | orchestrator | 2025-08-29 21:18:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:56.658615 | orchestrator | 2025-08-29 21:18:56 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:56.659504 | orchestrator | 2025-08-29 21:18:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:18:59.691322 | orchestrator | 2025-08-29 21:18:59 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:18:59.691480 | orchestrator | 2025-08-29 21:18:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:02.735238 | orchestrator | 2025-08-29 21:19:02 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:02.735339 | orchestrator | 2025-08-29 21:19:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:05.781672 | orchestrator | 2025-08-29 21:19:05 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:05.781774 | orchestrator | 2025-08-29 21:19:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:08.820129 | orchestrator | 2025-08-29 21:19:08 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:08.820226 | orchestrator | 2025-08-29 21:19:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:11.864789 | orchestrator | 2025-08-29 21:19:11 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:11.864894 | orchestrator | 2025-08-29 21:19:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:14.906483 | orchestrator | 2025-08-29 21:19:14 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:14.906592 | orchestrator | 2025-08-29 21:19:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:17.952083 | orchestrator | 2025-08-29 21:19:17 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:17.952183 | orchestrator | 2025-08-29 21:19:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:20.995305 | orchestrator | 2025-08-29 21:19:20 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:20.995475 | orchestrator | 2025-08-29 21:19:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:24.030631 | orchestrator | 2025-08-29 21:19:24 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:24.030734 | orchestrator | 2025-08-29 21:19:24 | INFO  | Wait 1 second(s) 
until the next check 2025-08-29 21:19:27.068988 | orchestrator | 2025-08-29 21:19:27 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:27.069080 | orchestrator | 2025-08-29 21:19:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:30.099960 | orchestrator | 2025-08-29 21:19:30 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:30.100048 | orchestrator | 2025-08-29 21:19:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:33.145564 | orchestrator | 2025-08-29 21:19:33 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:33.145687 | orchestrator | 2025-08-29 21:19:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:36.193798 | orchestrator | 2025-08-29 21:19:36 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:36.193905 | orchestrator | 2025-08-29 21:19:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:39.238275 | orchestrator | 2025-08-29 21:19:39 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:39.238385 | orchestrator | 2025-08-29 21:19:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:42.276044 | orchestrator | 2025-08-29 21:19:42 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:42.276150 | orchestrator | 2025-08-29 21:19:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:45.316613 | orchestrator | 2025-08-29 21:19:45 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:45.316739 | orchestrator | 2025-08-29 21:19:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:48.354103 | orchestrator | 2025-08-29 21:19:48 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:48.354182 | orchestrator | 2025-08-29 21:19:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:51.389931 | orchestrator | 2025-08-29 21:19:51 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:51.390089 | orchestrator | 2025-08-29 21:19:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:54.419657 | orchestrator | 2025-08-29 21:19:54 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:54.419753 | orchestrator | 2025-08-29 21:19:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:19:57.464463 | orchestrator | 2025-08-29 21:19:57 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:19:57.464632 | orchestrator | 2025-08-29 21:19:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:00.502247 | orchestrator | 2025-08-29 21:20:00 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:00.502332 | orchestrator | 2025-08-29 21:20:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:03.546449 | orchestrator | 2025-08-29 21:20:03 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:03.546562 | orchestrator | 2025-08-29 21:20:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:06.581821 | orchestrator | 2025-08-29 21:20:06 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:06.581883 | orchestrator | 2025-08-29 21:20:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:09.624277 | orchestrator | 2025-08-29 
21:20:09 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:09.625003 | orchestrator | 2025-08-29 21:20:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:12.662752 | orchestrator | 2025-08-29 21:20:12 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:12.662843 | orchestrator | 2025-08-29 21:20:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:15.692277 | orchestrator | 2025-08-29 21:20:15 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:15.692365 | orchestrator | 2025-08-29 21:20:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:18.717349 | orchestrator | 2025-08-29 21:20:18 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:18.717548 | orchestrator | 2025-08-29 21:20:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:21.746912 | orchestrator | 2025-08-29 21:20:21 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:21.747012 | orchestrator | 2025-08-29 21:20:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:24.784823 | orchestrator | 2025-08-29 21:20:24 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:24.784925 | orchestrator | 2025-08-29 21:20:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:27.828349 | orchestrator | 2025-08-29 21:20:27 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:27.828496 | orchestrator | 2025-08-29 21:20:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:30.872928 | orchestrator | 2025-08-29 21:20:30 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:30.873056 | orchestrator | 2025-08-29 21:20:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:33.916576 | orchestrator | 2025-08-29 21:20:33 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:33.916684 | orchestrator | 2025-08-29 21:20:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:36.974745 | orchestrator | 2025-08-29 21:20:36 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:36.974938 | orchestrator | 2025-08-29 21:20:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:40.022631 | orchestrator | 2025-08-29 21:20:40 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:40.022717 | orchestrator | 2025-08-29 21:20:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:43.070643 | orchestrator | 2025-08-29 21:20:43 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:43.070744 | orchestrator | 2025-08-29 21:20:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:46.119232 | orchestrator | 2025-08-29 21:20:46 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:46.119339 | orchestrator | 2025-08-29 21:20:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:49.164262 | orchestrator | 2025-08-29 21:20:49 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 2025-08-29 21:20:49.164412 | orchestrator | 2025-08-29 21:20:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:52.199499 | orchestrator | 2025-08-29 21:20:52 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state STARTED 
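The repeated status records above are produced by a simple poll-and-wait loop: the deploy wrapper asks the manager for the state of task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b, sleeps, and asks again until the task leaves the STARTED state. A minimal Python sketch of that pattern, using a hypothetical get_task_state() helper rather than the actual OSISM client API:

    import time

    def wait_for_task(task_id, get_task_state, interval=1.0):
        # Poll until the task reaches a terminal state (sketch only).
        while True:
            state = get_task_state(task_id)  # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                return state
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)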
2025-08-29 21:20:52.199611 | orchestrator | 2025-08-29 21:20:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 21:20:55.238424 | orchestrator | 2025-08-29 21:20:55 | INFO  | Task 499f17cc-4df3-496a-b1bb-4ee1fe25aa2b is in state SUCCESS 2025-08-29 21:20:55.240057 | orchestrator | 2025-08-29 21:20:55.240124 | orchestrator | 2025-08-29 21:20:55.240138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:20:55.240149 | orchestrator | 2025-08-29 21:20:55.240161 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-08-29 21:20:55.240172 | orchestrator | Friday 29 August 2025 21:12:22 +0000 (0:00:00.384) 0:00:00.384 ********* 2025-08-29 21:20:55.240183 | orchestrator | changed: [testbed-manager] 2025-08-29 21:20:55.240194 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.240205 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.240216 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.240227 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.240238 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.240248 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.240259 | orchestrator | 2025-08-29 21:20:55.240270 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:20:55.240281 | orchestrator | Friday 29 August 2025 21:12:23 +0000 (0:00:01.000) 0:00:01.384 ********* 2025-08-29 21:20:55.240292 | orchestrator | changed: [testbed-manager] 2025-08-29 21:20:55.240303 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.240314 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.240324 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.240335 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.240346 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.240398 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.240411 | orchestrator | 2025-08-29 21:20:55.240423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:20:55.240554 | orchestrator | Friday 29 August 2025 21:12:23 +0000 (0:00:00.579) 0:00:01.964 ********* 2025-08-29 21:20:55.240570 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-08-29 21:20:55.240605 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 21:20:55.240617 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 21:20:55.240629 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-08-29 21:20:55.240642 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-08-29 21:20:55.240654 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-08-29 21:20:55.240671 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-08-29 21:20:55.240691 | orchestrator | 2025-08-29 21:20:55.240709 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-08-29 21:20:55.240728 | orchestrator | 2025-08-29 21:20:55.240747 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 21:20:55.240766 | orchestrator | Friday 29 August 2025 21:12:25 +0000 (0:00:01.174) 0:00:03.138 ********* 2025-08-29 21:20:55.240785 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 
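The three grouping tasks above build dynamic inventory groups such as enable_nova_True, so the plays that follow only target hosts on which the corresponding service is enabled. Sketched in plain Python with hypothetical inventory data (this is what the group_by tasks effectively compute, not kolla-ansible code):

    from collections import defaultdict

    # Hypothetical host variables; in the real run they come from the
    # kolla-ansible inventory and group_vars.
    host_vars = {
        "testbed-manager": {"enable_nova": True},
        "testbed-node-0": {"enable_nova": True},
        "testbed-node-1": {"enable_nova": True},
    }

    groups = defaultdict(list)
    for host, variables in host_vars.items():
        # Mirrors the group_by key "enable_nova_{{ enable_nova }}" used above.
        groups[f"enable_nova_{variables.get('enable_nova', True)}"].append(host)

    print(sorted(groups["enable_nova_True"]))  # hosts the nova plays will address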
2025-08-29 21:20:55.240805 | orchestrator | 2025-08-29 21:20:55.240826 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-08-29 21:20:55.240850 | orchestrator | Friday 29 August 2025 21:12:25 +0000 (0:00:00.570) 0:00:03.709 ********* 2025-08-29 21:20:55.240874 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-08-29 21:20:55.240896 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-08-29 21:20:55.240921 | orchestrator | 2025-08-29 21:20:55.240943 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-08-29 21:20:55.240965 | orchestrator | Friday 29 August 2025 21:12:29 +0000 (0:00:04.108) 0:00:07.817 ********* 2025-08-29 21:20:55.240984 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:20:55.241005 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 21:20:55.241022 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.241040 | orchestrator | 2025-08-29 21:20:55.241058 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 21:20:55.241077 | orchestrator | Friday 29 August 2025 21:12:34 +0000 (0:00:04.493) 0:00:12.311 ********* 2025-08-29 21:20:55.241096 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.241116 | orchestrator | 2025-08-29 21:20:55.241135 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-08-29 21:20:55.241154 | orchestrator | Friday 29 August 2025 21:12:35 +0000 (0:00:00.782) 0:00:13.093 ********* 2025-08-29 21:20:55.241174 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.241245 | orchestrator | 2025-08-29 21:20:55.241266 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-08-29 21:20:55.241286 | orchestrator | Friday 29 August 2025 21:12:36 +0000 (0:00:01.848) 0:00:14.942 ********* 2025-08-29 21:20:55.241387 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.241399 | orchestrator | 2025-08-29 21:20:55.241410 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 21:20:55.241434 | orchestrator | Friday 29 August 2025 21:12:39 +0000 (0:00:02.657) 0:00:17.599 ********* 2025-08-29 21:20:55.241445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.241456 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.241467 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.241478 | orchestrator | 2025-08-29 21:20:55.241489 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 21:20:55.241500 | orchestrator | Friday 29 August 2025 21:12:39 +0000 (0:00:00.273) 0:00:17.872 ********* 2025-08-29 21:20:55.241511 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.241522 | orchestrator | 2025-08-29 21:20:55.241534 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-08-29 21:20:55.241545 | orchestrator | Friday 29 August 2025 21:13:15 +0000 (0:00:35.688) 0:00:53.561 ********* 2025-08-29 21:20:55.241555 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.241566 | orchestrator | 2025-08-29 21:20:55.241577 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 21:20:55.241600 | orchestrator | Friday 29 August 2025 21:13:29 +0000 (0:00:13.918) 0:01:07.479 ********* 
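The API bootstrap container and the cell0 mapping task above boil down to standard nova-manage operations: sync the nova_api schema, register the nova_cell0 database, and then inspect the cell list. A rough Python sketch of that sequence (illustrative only; in this deployment the commands run inside the nova bootstrap container and the exact invocation may differ):

    import subprocess

    def nova_manage(*args):
        # Thin wrapper; assumes nova-manage and its configuration are available.
        subprocess.run(["nova-manage", *args], check=True)

    nova_manage("api_db", "sync")                      # schema for the nova_api database
    nova_manage("cell_v2", "map_cell0")                # registers the nova_cell0 database
    nova_manage("cell_v2", "list_cells", "--verbose")  # "Get a list of existing cells"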
2025-08-29 21:20:55.241611 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.241621 | orchestrator | 2025-08-29 21:20:55.241632 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 21:20:55.241643 | orchestrator | Friday 29 August 2025 21:13:40 +0000 (0:00:11.466) 0:01:18.945 ********* 2025-08-29 21:20:55.241668 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.241680 | orchestrator | 2025-08-29 21:20:55.241691 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-08-29 21:20:55.241702 | orchestrator | Friday 29 August 2025 21:13:41 +0000 (0:00:00.848) 0:01:19.794 ********* 2025-08-29 21:20:55.241713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.241723 | orchestrator | 2025-08-29 21:20:55.241734 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 21:20:55.241745 | orchestrator | Friday 29 August 2025 21:13:42 +0000 (0:00:00.460) 0:01:20.255 ********* 2025-08-29 21:20:55.241756 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.241767 | orchestrator | 2025-08-29 21:20:55.241777 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 21:20:55.241788 | orchestrator | Friday 29 August 2025 21:13:42 +0000 (0:00:00.428) 0:01:20.684 ********* 2025-08-29 21:20:55.241799 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.241810 | orchestrator | 2025-08-29 21:20:55.241821 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 21:20:55.241831 | orchestrator | Friday 29 August 2025 21:14:00 +0000 (0:00:17.645) 0:01:38.329 ********* 2025-08-29 21:20:55.241842 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.241853 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.241863 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.241874 | orchestrator | 2025-08-29 21:20:55.241885 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-08-29 21:20:55.241896 | orchestrator | 2025-08-29 21:20:55.241906 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 21:20:55.241917 | orchestrator | Friday 29 August 2025 21:14:00 +0000 (0:00:00.353) 0:01:38.682 ********* 2025-08-29 21:20:55.241928 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.241939 | orchestrator | 2025-08-29 21:20:55.241950 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-08-29 21:20:55.241961 | orchestrator | Friday 29 August 2025 21:14:01 +0000 (0:00:00.583) 0:01:39.266 ********* 2025-08-29 21:20:55.241971 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.241982 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.241993 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.242004 | orchestrator | 2025-08-29 21:20:55.242061 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-08-29 21:20:55.242077 | orchestrator | Friday 29 August 2025 21:14:03 +0000 (0:00:02.243) 0:01:41.510 ********* 2025-08-29 21:20:55.242088 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242099 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 21:20:55.242110 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.242121 | orchestrator | 2025-08-29 21:20:55.242132 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 21:20:55.242143 | orchestrator | Friday 29 August 2025 21:14:05 +0000 (0:00:02.208) 0:01:43.719 ********* 2025-08-29 21:20:55.242154 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.242165 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242175 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242186 | orchestrator | 2025-08-29 21:20:55.242197 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 21:20:55.242208 | orchestrator | Friday 29 August 2025 21:14:06 +0000 (0:00:00.353) 0:01:44.073 ********* 2025-08-29 21:20:55.242229 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 21:20:55.242240 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242251 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 21:20:55.242261 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242272 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 21:20:55.242283 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-08-29 21:20:55.242294 | orchestrator | 2025-08-29 21:20:55.242305 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 21:20:55.242316 | orchestrator | Friday 29 August 2025 21:14:14 +0000 (0:00:08.470) 0:01:52.543 ********* 2025-08-29 21:20:55.242326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.242337 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242348 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242377 | orchestrator | 2025-08-29 21:20:55.242389 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 21:20:55.242399 | orchestrator | Friday 29 August 2025 21:14:14 +0000 (0:00:00.289) 0:01:52.833 ********* 2025-08-29 21:20:55.242410 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 21:20:55.242421 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.242431 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 21:20:55.242447 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242458 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 21:20:55.242469 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242480 | orchestrator | 2025-08-29 21:20:55.242490 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 21:20:55.242501 | orchestrator | Friday 29 August 2025 21:14:15 +0000 (0:00:00.572) 0:01:53.406 ********* 2025-08-29 21:20:55.242512 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242523 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242533 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.242544 | orchestrator | 2025-08-29 21:20:55.242555 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-08-29 21:20:55.242566 | orchestrator | Friday 29 August 2025 21:14:15 +0000 (0:00:00.530) 0:01:53.936 ********* 2025-08-29 21:20:55.242577 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242592 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 21:20:55.242611 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.242628 | orchestrator | 2025-08-29 21:20:55.242646 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-08-29 21:20:55.242657 | orchestrator | Friday 29 August 2025 21:14:17 +0000 (0:00:01.142) 0:01:55.078 ********* 2025-08-29 21:20:55.242668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242679 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242707 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.242718 | orchestrator | 2025-08-29 21:20:55.242729 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 21:20:55.242740 | orchestrator | Friday 29 August 2025 21:14:19 +0000 (0:00:01.908) 0:01:56.987 ********* 2025-08-29 21:20:55.242750 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242761 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242771 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.242782 | orchestrator | 2025-08-29 21:20:55.242793 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 21:20:55.242803 | orchestrator | Friday 29 August 2025 21:14:40 +0000 (0:00:21.484) 0:02:18.472 ********* 2025-08-29 21:20:55.242814 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242824 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242835 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.242846 | orchestrator | 2025-08-29 21:20:55.242856 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 21:20:55.242867 | orchestrator | Friday 29 August 2025 21:14:52 +0000 (0:00:12.120) 0:02:30.593 ********* 2025-08-29 21:20:55.242885 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.242896 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242907 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242917 | orchestrator | 2025-08-29 21:20:55.242928 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 21:20:55.242939 | orchestrator | Friday 29 August 2025 21:14:53 +0000 (0:00:00.833) 0:02:31.426 ********* 2025-08-29 21:20:55.242949 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.242960 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.242993 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.243004 | orchestrator | 2025-08-29 21:20:55.243015 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 21:20:55.243026 | orchestrator | Friday 29 August 2025 21:15:05 +0000 (0:00:11.764) 0:02:43.190 ********* 2025-08-29 21:20:55.243037 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.243047 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.243058 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.243068 | orchestrator | 2025-08-29 21:20:55.243079 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 21:20:55.243090 | orchestrator | Friday 29 August 2025 21:15:06 +0000 (0:00:01.199) 0:02:44.389 ********* 2025-08-29 21:20:55.243100 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.243111 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.243121 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 21:20:55.243132 | orchestrator | 2025-08-29 21:20:55.243143 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 21:20:55.243153 | orchestrator | 2025-08-29 21:20:55.243164 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 21:20:55.243174 | orchestrator | Friday 29 August 2025 21:15:06 +0000 (0:00:00.284) 0:02:44.674 ********* 2025-08-29 21:20:55.243185 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.243197 | orchestrator | 2025-08-29 21:20:55.243208 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 21:20:55.243218 | orchestrator | Friday 29 August 2025 21:15:07 +0000 (0:00:00.471) 0:02:45.145 ********* 2025-08-29 21:20:55.243229 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 21:20:55.243239 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 21:20:55.243250 | orchestrator | 2025-08-29 21:20:55.243261 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 21:20:55.243271 | orchestrator | Friday 29 August 2025 21:15:10 +0000 (0:00:03.194) 0:02:48.339 ********* 2025-08-29 21:20:55.243282 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 21:20:55.243294 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 21:20:55.243305 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 21:20:55.243316 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 21:20:55.243327 | orchestrator | 2025-08-29 21:20:55.243337 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 21:20:55.243348 | orchestrator | Friday 29 August 2025 21:15:17 +0000 (0:00:07.109) 0:02:55.449 ********* 2025-08-29 21:20:55.243410 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 21:20:55.243422 | orchestrator | 2025-08-29 21:20:55.243438 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 21:20:55.243449 | orchestrator | Friday 29 August 2025 21:15:20 +0000 (0:00:03.451) 0:02:58.900 ********* 2025-08-29 21:20:55.243460 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 21:20:55.243477 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 21:20:55.243488 | orchestrator | 2025-08-29 21:20:55.243499 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 21:20:55.243510 | orchestrator | Friday 29 August 2025 21:15:24 +0000 (0:00:03.935) 0:03:02.836 ********* 2025-08-29 21:20:55.243520 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 21:20:55.243531 | orchestrator | 2025-08-29 21:20:55.243541 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 21:20:55.243552 | orchestrator | Friday 29 August 2025 21:15:28 +0000 (0:00:03.316) 0:03:06.152 ********* 2025-08-29 21:20:55.243563 | orchestrator | 
changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 21:20:55.243574 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 21:20:55.243584 | orchestrator | 2025-08-29 21:20:55.243595 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 21:20:55.243701 | orchestrator | Friday 29 August 2025 21:15:35 +0000 (0:00:07.626) 0:03:13.779 ********* 2025-08-29 21:20:55.243718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.243736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.243754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.243782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.243795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.243807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.243818 | orchestrator | 2025-08-29 21:20:55.243830 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 21:20:55.243841 | orchestrator | Friday 29 August 2025 21:15:37 +0000 (0:00:01.462) 0:03:15.242 ********* 2025-08-29 21:20:55.243852 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.243863 | orchestrator | 2025-08-29 21:20:55.243873 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 21:20:55.243884 | orchestrator | Friday 29 August 2025 21:15:37 +0000 (0:00:00.110) 0:03:15.352 ********* 2025-08-29 21:20:55.243895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.243906 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.243915 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.243925 | orchestrator | 2025-08-29 
21:20:55.243935 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 21:20:55.243945 | orchestrator | Friday 29 August 2025 21:15:37 +0000 (0:00:00.474) 0:03:15.826 ********* 2025-08-29 21:20:55.243954 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 21:20:55.243964 | orchestrator | 2025-08-29 21:20:55.243974 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 21:20:55.243983 | orchestrator | Friday 29 August 2025 21:15:38 +0000 (0:00:00.685) 0:03:16.511 ********* 2025-08-29 21:20:55.243993 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.244008 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.244018 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.244027 | orchestrator | 2025-08-29 21:20:55.244037 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 21:20:55.244046 | orchestrator | Friday 29 August 2025 21:15:38 +0000 (0:00:00.308) 0:03:16.820 ********* 2025-08-29 21:20:55.244056 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.244066 | orchestrator | 2025-08-29 21:20:55.244075 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 21:20:55.244085 | orchestrator | Friday 29 August 2025 21:15:39 +0000 (0:00:00.517) 0:03:17.337 ********* 2025-08-29 21:20:55.244106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244211 | orchestrator | 2025-08-29 21:20:55.244226 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 21:20:55.244242 | orchestrator | Friday 29 August 2025 21:15:42 +0000 (0:00:02.781) 0:03:20.119 ********* 2025-08-29 21:20:55.244259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.244375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.244422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244449 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.244459 | orchestrator | 2025-08-29 21:20:55.244469 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 21:20:55.244478 | orchestrator | Friday 29 August 2025 21:15:42 +0000 (0:00:00.587) 0:03:20.706 ********* 2025-08-29 21:20:55.244493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.244532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244559 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 21:20:55.244569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244603 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.244613 | orchestrator | 2025-08-29 21:20:55.244623 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 21:20:55.244632 | orchestrator | Friday 29 August 2025 21:15:43 +0000 (0:00:00.770) 0:03:21.476 ********* 2025-08-29 21:20:55.244650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244790 | orchestrator | 2025-08-29 21:20:55.244808 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 21:20:55.244824 | orchestrator | Friday 29 August 2025 21:15:46 +0000 (0:00:02.803) 0:03:24.280 ********* 2025-08-29 21:20:55.244835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.244890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.244921 | orchestrator | 2025-08-29 21:20:55.244931 | orchestrator | TASK [nova : Copying over 
existing policy file] ******************************** 2025-08-29 21:20:55.244941 | orchestrator | Friday 29 August 2025 21:15:51 +0000 (0:00:05.588) 0:03:29.869 ********* 2025-08-29 21:20:55.244962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.244974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.244990 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.245001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.245012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 
'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.245022 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.245036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 21:20:55.245054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.245070 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.245080 | orchestrator | 2025-08-29 21:20:55.245090 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-08-29 21:20:55.245100 | orchestrator | Friday 29 August 2025 21:15:52 +0000 (0:00:00.582) 0:03:30.451 ********* 2025-08-29 21:20:55.245110 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.245119 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.245129 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.245138 | orchestrator | 2025-08-29 21:20:55.245148 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 21:20:55.245157 | orchestrator | Friday 29 August 2025 21:15:54 +0000 (0:00:01.728) 0:03:32.180 ********* 2025-08-29 21:20:55.245167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.245177 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.245189 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 21:20:55.245206 | orchestrator | 2025-08-29 21:20:55.245222 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 21:20:55.245237 | orchestrator | Friday 29 August 2025 21:15:54 +0000 (0:00:00.399) 0:03:32.579 ********* 2025-08-29 21:20:55.245264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.245296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.245329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 21:20:55.245383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.245409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.245428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.245446 | orchestrator | 2025-08-29 21:20:55.245464 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 21:20:55.245482 | orchestrator | Friday 29 August 2025 21:15:56 +0000 (0:00:01.904) 0:03:34.484 ********* 2025-08-29 21:20:55.245499 | orchestrator | 2025-08-29 21:20:55.245511 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 21:20:55.245520 | orchestrator | Friday 29 August 2025 21:15:56 +0000 (0:00:00.117) 0:03:34.602 ********* 2025-08-29 21:20:55.245530 | orchestrator | 2025-08-29 21:20:55.245540 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 21:20:55.245549 | orchestrator | Friday 29 August 2025 21:15:56 +0000 (0:00:00.117) 0:03:34.719 ********* 2025-08-29 21:20:55.245559 | orchestrator | 2025-08-29 21:20:55.245568 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 21:20:55.245584 | 
orchestrator | Friday 29 August 2025 21:15:56 +0000 (0:00:00.123) 0:03:34.842 ********* 2025-08-29 21:20:55.245594 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.245603 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.245613 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.245622 | orchestrator | 2025-08-29 21:20:55.245632 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-08-29 21:20:55.245651 | orchestrator | Friday 29 August 2025 21:16:17 +0000 (0:00:20.691) 0:03:55.534 ********* 2025-08-29 21:20:55.245661 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.245670 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.245680 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.245689 | orchestrator | 2025-08-29 21:20:55.245699 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-08-29 21:20:55.245709 | orchestrator | 2025-08-29 21:20:55.245718 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 21:20:55.245728 | orchestrator | Friday 29 August 2025 21:16:23 +0000 (0:00:05.615) 0:04:01.150 ********* 2025-08-29 21:20:55.245738 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.245748 | orchestrator | 2025-08-29 21:20:55.245765 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 21:20:55.245776 | orchestrator | Friday 29 August 2025 21:16:24 +0000 (0:00:01.147) 0:04:02.297 ********* 2025-08-29 21:20:55.245785 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.245795 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.245804 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.245813 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.245823 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.245832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.245842 | orchestrator | 2025-08-29 21:20:55.245851 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-08-29 21:20:55.245861 | orchestrator | Friday 29 August 2025 21:16:25 +0000 (0:00:00.725) 0:04:03.023 ********* 2025-08-29 21:20:55.245870 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.245880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.245889 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.245898 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:20:55.245908 | orchestrator | 2025-08-29 21:20:55.245917 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 21:20:55.245927 | orchestrator | Friday 29 August 2025 21:16:25 +0000 (0:00:00.770) 0:04:03.793 ********* 2025-08-29 21:20:55.245936 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-08-29 21:20:55.245946 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-08-29 21:20:55.245955 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-08-29 21:20:55.245965 | orchestrator | 2025-08-29 21:20:55.245975 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 21:20:55.245984 | orchestrator | Friday 29 August 
2025 21:16:26 +0000 (0:00:00.894) 0:04:04.687 ********* 2025-08-29 21:20:55.245994 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-08-29 21:20:55.246003 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-08-29 21:20:55.246013 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-08-29 21:20:55.246053 | orchestrator | 2025-08-29 21:20:55.246062 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 21:20:55.246072 | orchestrator | Friday 29 August 2025 21:16:27 +0000 (0:00:01.214) 0:04:05.901 ********* 2025-08-29 21:20:55.246082 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-08-29 21:20:55.246091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.246101 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-08-29 21:20:55.246111 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.246120 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-08-29 21:20:55.246130 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.246139 | orchestrator | 2025-08-29 21:20:55.246149 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-08-29 21:20:55.246158 | orchestrator | Friday 29 August 2025 21:16:28 +0000 (0:00:00.542) 0:04:06.443 ********* 2025-08-29 21:20:55.246175 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 21:20:55.246185 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 21:20:55.246194 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.246204 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 21:20:55.246214 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 21:20:55.246223 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 21:20:55.246233 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 21:20:55.246242 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.246252 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 21:20:55.246261 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 21:20:55.246271 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.246280 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 21:20:55.246290 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 21:20:55.246299 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 21:20:55.246311 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 21:20:55.246328 | orchestrator | 2025-08-29 21:20:55.246344 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 21:20:55.246385 | orchestrator | Friday 29 August 2025 21:16:30 +0000 (0:00:02.375) 0:04:08.819 ********* 2025-08-29 21:20:55.246407 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.246429 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.246444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.246459 | 
orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.246473 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.246489 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.246502 | orchestrator | 2025-08-29 21:20:55.246517 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 21:20:55.246533 | orchestrator | Friday 29 August 2025 21:16:32 +0000 (0:00:01.225) 0:04:10.045 ********* 2025-08-29 21:20:55.246548 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.246564 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.246578 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.246594 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.246608 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.246623 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.246638 | orchestrator | 2025-08-29 21:20:55.246653 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 21:20:55.246668 | orchestrator | Friday 29 August 2025 21:16:34 +0000 (0:00:01.945) 0:04:11.990 ********* 2025-08-29 21:20:55.246699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246797 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2025-08-29 21:20:55.246890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.246916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247670 | orchestrator | 2025-08-29 21:20:55.247687 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 21:20:55.247701 | orchestrator | Friday 29 August 2025 21:16:36 +0000 (0:00:02.392) 0:04:14.383 ********* 2025-08-29 21:20:55.247711 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:20:55.247721 | orchestrator | 2025-08-29 21:20:55.247731 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 21:20:55.247741 | orchestrator | Friday 29 August 2025 21:16:37 +0000 (0:00:01.130) 0:04:15.513 ********* 2025-08-29 21:20:55.247758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.247857 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.248126 | orchestrator | 2025-08-29 21:20:55.248134 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 21:20:55.248142 | orchestrator | Friday 29 August 2025 21:16:41 +0000 (0:00:03.874) 0:04:19.388 ********* 2025-08-29 21:20:55.248150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248217 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.248226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.248250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248284 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.248314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248331 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.248340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248380 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.248395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248422 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.248431 | orchestrator | 2025-08-29 21:20:55.248439 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 21:20:55.248447 | orchestrator | Friday 29 August 2025 21:16:42 +0000 (0:00:01.311) 0:04:20.700 ********* 2025-08-29 21:20:55.248480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248532 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.248562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248572 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.248580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.248588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.248596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.248616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248639 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.248669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248688 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.248698 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.248708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.248717 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.248726 | orchestrator | 2025-08-29 21:20:55.248734 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 21:20:55.248749 | orchestrator | Friday 29 August 2025 21:16:44 +0000 (0:00:01.594) 0:04:22.294 ********* 2025-08-29 21:20:55.248758 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.248767 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.248776 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.248785 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 21:20:55.248794 | orchestrator | 2025-08-29 21:20:55.248803 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-08-29 21:20:55.248811 | orchestrator | Friday 29 August 2025 21:16:45 +0000 (0:00:00.848) 0:04:23.142 ********* 2025-08-29 21:20:55.248820 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 21:20:55.248829 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 21:20:55.248838 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 21:20:55.248847 | orchestrator | 2025-08-29 21:20:55.248856 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-08-29 21:20:55.248868 | orchestrator | Friday 29 August 2025 21:16:45 +0000 (0:00:00.785) 0:04:23.928 ********* 2025-08-29 21:20:55.248877 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 21:20:55.248885 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 21:20:55.248894 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 21:20:55.248903 | orchestrator | 2025-08-29 21:20:55.248911 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-08-29 21:20:55.248920 | orchestrator | Friday 29 August 2025 21:16:46 +0000 (0:00:00.882) 0:04:24.811 ********* 2025-08-29 21:20:55.248929 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:20:55.248938 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:20:55.248946 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:20:55.248956 | orchestrator | 2025-08-29 
21:20:55.248964 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-08-29 21:20:55.248973 | orchestrator | Friday 29 August 2025 21:16:47 +0000 (0:00:00.470) 0:04:25.282 *********
2025-08-29 21:20:55.248982 | orchestrator | ok: [testbed-node-3]
2025-08-29 21:20:55.248991 | orchestrator | ok: [testbed-node-4]
2025-08-29 21:20:55.249000 | orchestrator | ok: [testbed-node-5]
2025-08-29 21:20:55.249008 | orchestrator |
2025-08-29 21:20:55.249016 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-08-29 21:20:55.249024 | orchestrator | Friday 29 August 2025 21:16:47 +0000 (0:00:00.462) 0:04:25.744 *********
2025-08-29 21:20:55.249032 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 21:20:55.249062 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 21:20:55.249071 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 21:20:55.249079 | orchestrator |
2025-08-29 21:20:55.249087 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-08-29 21:20:55.249095 | orchestrator | Friday 29 August 2025 21:16:48 +0000 (0:00:01.221) 0:04:26.966 *********
2025-08-29 21:20:55.249103 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 21:20:55.249111 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 21:20:55.249119 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 21:20:55.249126 | orchestrator |
2025-08-29 21:20:55.249134 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-08-29 21:20:55.249142 | orchestrator | Friday 29 August 2025 21:16:50 +0000 (0:00:01.302) 0:04:28.269 *********
2025-08-29 21:20:55.249150 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-08-29 21:20:55.249158 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-08-29 21:20:55.249166 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-08-29 21:20:55.249174 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-08-29 21:20:55.249181 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-08-29 21:20:55.249189 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-08-29 21:20:55.249202 | orchestrator |
2025-08-29 21:20:55.249210 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-08-29 21:20:55.249218 | orchestrator | Friday 29 August 2025 21:16:53 +0000 (0:00:03.588) 0:04:31.858 *********
2025-08-29 21:20:55.249226 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:20:55.249234 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:20:55.249241 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:20:55.249249 | orchestrator |
2025-08-29 21:20:55.249257 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-08-29 21:20:55.249265 | orchestrator | Friday 29 August 2025 21:16:54 +0000 (0:00:00.317) 0:04:32.176 *********
2025-08-29 21:20:55.249273 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:20:55.249281 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:20:55.249289 | orchestrator | skipping: [testbed-node-5]
2025-08-29 21:20:55.249296 | orchestrator |
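The external_ceph.yml block included above stages the Ceph client credentials for the compute hosts: the keyring files are verified on the deployment host, the cephx keys are extracted from them, and ceph.conf plus the client keyrings are written into the per-service /etc/kolla/<service>/ directories that the containers mount as /var/lib/kolla/config_files/ (the volume list in the log shows this mount). A minimal Ansible-style sketch of that copy pattern follows; file names, paths and modes are illustrative assumptions, not the verbatim nova-cell role code:

  # Sketch only: stage Ceph client config into the kolla service config dirs.
  # Paths, file names and modes below are assumptions for illustration.
  - name: Copy over ceph.conf
    ansible.builtin.template:
      src: ceph.conf.j2                          # rendered from the operator's Ceph settings
      dest: "/etc/kolla/{{ item }}/ceph.conf"    # mounted into the container as config_files
      mode: "0660"
    loop:
      - nova-compute
      - nova-libvirt

  - name: Copy over ceph nova keyring file
    ansible.builtin.copy:
      src: ceph.client.nova.keyring              # assumed keyring file name on the deploy host
      dest: "/etc/kolla/{{ item }}/ceph.client.nova.keyring"
      mode: "0600"
    loop:
      - nova-compute

Each changed/skipping line in the log corresponds to one loop iteration per target service directory; the "(host libvirt)" variants are skipped here, consistent with libvirt running in the nova_libvirt container rather than on the host.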
2025-08-29 21:20:55.249304 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-08-29 21:20:55.249312 | orchestrator | Friday 29 August 2025 21:16:54 +0000 (0:00:00.280) 0:04:32.457 *********
2025-08-29 21:20:55.249320 | orchestrator | changed: [testbed-node-3]
2025-08-29 21:20:55.249328 | orchestrator | changed: [testbed-node-4]
2025-08-29 21:20:55.249336 | orchestrator | changed: [testbed-node-5]
2025-08-29 21:20:55.249344 | orchestrator |
2025-08-29 21:20:55.249365 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-08-29 21:20:55.249375 | orchestrator | Friday 29 August 2025 21:16:56 +0000 (0:00:01.722) 0:04:34.179 *********
2025-08-29 21:20:55.249383 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 21:20:55.249391 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 21:20:55.249399 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-08-29 21:20:55.249407 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 21:20:55.249415 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 21:20:55.249423 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-08-29 21:20:55.249431 | orchestrator |
2025-08-29 21:20:55.249439 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-08-29 21:20:55.249446 | orchestrator | Friday 29 August 2025 21:16:59 +0000 (0:00:03.251) 0:04:37.431 *********
2025-08-29 21:20:55.249454 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 21:20:55.249462 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 21:20:55.249470 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 21:20:55.249478 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 21:20:55.249494 | orchestrator | changed: [testbed-node-3]
2025-08-29 21:20:55.249508 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 21:20:55.249522 | orchestrator | changed: [testbed-node-4]
2025-08-29 21:20:55.249535 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 21:20:55.249550 | orchestrator | changed: [testbed-node-5]
2025-08-29 21:20:55.249569 | orchestrator |
2025-08-29 21:20:55.249582 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-08-29 21:20:55.249596 | orchestrator | Friday 29 August 2025 21:17:02 +0000 (0:00:03.320) 0:04:40.752 *********
2025-08-29 21:20:55.249609 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:20:55.249617 | orchestrator |
2025-08-29 21:20:55.249624 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-08-29 21:20:55.249632 | orchestrator | Friday 29 August 2025 21:17:02 +0000 (0:00:00.123) 0:04:40.875 *********
2025-08-29 21:20:55.249647 | orchestrator | skipping: [testbed-node-3]
2025-08-29 21:20:55.249655 | orchestrator | skipping: [testbed-node-4]
2025-08-29 21:20:55.249663 | orchestrator | skipping:
[testbed-node-5] 2025-08-29 21:20:55.249671 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.249678 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.249686 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.249694 | orchestrator | 2025-08-29 21:20:55.249702 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-08-29 21:20:55.249742 | orchestrator | Friday 29 August 2025 21:17:03 +0000 (0:00:00.712) 0:04:41.587 ********* 2025-08-29 21:20:55.249751 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 21:20:55.249759 | orchestrator | 2025-08-29 21:20:55.249767 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-08-29 21:20:55.249775 | orchestrator | Friday 29 August 2025 21:17:04 +0000 (0:00:00.679) 0:04:42.267 ********* 2025-08-29 21:20:55.249783 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.249791 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.249799 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.249807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.249814 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.249822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.249830 | orchestrator | 2025-08-29 21:20:55.249838 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 21:20:55.249846 | orchestrator | Friday 29 August 2025 21:17:04 +0000 (0:00:00.596) 0:04:42.863 ********* 2025-08-29 21:20:55.249855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.249994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250038 | orchestrator | 2025-08-29 21:20:55.250049 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 21:20:55.250057 | orchestrator | Friday 29 August 2025 21:17:09 +0000 (0:00:04.126) 0:04:46.990 ********* 2025-08-29 21:20:55.250068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.250083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.250091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.250100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.250108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.250124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.250137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.250226 | orchestrator | 2025-08-29 21:20:55.250234 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 21:20:55.250243 | orchestrator | Friday 29 August 2025 21:17:14 +0000 (0:00:05.846) 0:04:52.836 ********* 2025-08-29 21:20:55.250251 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.250259 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.250267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.250275 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.250283 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.250290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.250298 | orchestrator | 2025-08-29 21:20:55.250306 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 
21:20:55.250319 | orchestrator | Friday 29 August 2025 21:17:16 +0000 (0:00:01.413) 0:04:54.249 ********* 2025-08-29 21:20:55.250327 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 21:20:55.250335 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 21:20:55.250343 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 21:20:55.250392 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 21:20:55.250403 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 21:20:55.250411 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 21:20:55.250419 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 21:20:55.250427 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.250435 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 21:20:55.250443 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.250451 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 21:20:55.250459 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.250467 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 21:20:55.250475 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 21:20:55.250487 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 21:20:55.250495 | orchestrator | 2025-08-29 21:20:55.250503 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 21:20:55.250511 | orchestrator | Friday 29 August 2025 21:17:19 +0000 (0:00:03.557) 0:04:57.807 ********* 2025-08-29 21:20:55.250519 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.250526 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.250534 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.250542 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.250550 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.250558 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.250565 | orchestrator | 2025-08-29 21:20:55.250573 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 21:20:55.250581 | orchestrator | Friday 29 August 2025 21:17:20 +0000 (0:00:00.779) 0:04:58.587 ********* 2025-08-29 21:20:55.250589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 21:20:55.250597 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 21:20:55.250611 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 21:20:55.250625 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 21:20:55.250639 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 21:20:55.250652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250665 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 21:20:55.250678 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250691 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250710 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.250723 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250734 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250743 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.250750 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 21:20:55.250756 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.250763 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250770 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250776 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250783 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250789 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250796 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 21:20:55.250803 | orchestrator | 2025-08-29 21:20:55.250809 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 21:20:55.250816 | orchestrator | Friday 29 August 2025 21:17:25 +0000 (0:00:05.190) 0:05:03.777 ********* 2025-08-29 21:20:55.250822 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:20:55.250829 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:20:55.250835 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 21:20:55.250842 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 21:20:55.250848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 21:20:55.250855 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 21:20:55.250861 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:20:55.250867 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:20:55.250874 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 21:20:55.250880 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:20:55.250887 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:20:55.250899 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 21:20:55.250906 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 21:20:55.250912 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.250919 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 21:20:55.250925 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.250932 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 21:20:55.250938 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.250944 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:20:55.250951 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:20:55.250957 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 21:20:55.250969 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:20:55.250976 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:20:55.250986 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 21:20:55.250993 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:20:55.251000 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:20:55.251006 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 21:20:55.251013 | orchestrator | 2025-08-29 21:20:55.251019 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 21:20:55.251026 | orchestrator | Friday 29 August 2025 21:17:32 +0000 (0:00:07.109) 0:05:10.886 ********* 2025-08-29 21:20:55.251032 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.251039 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.251046 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.251052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.251059 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.251065 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251072 | orchestrator | 2025-08-29 21:20:55.251078 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 21:20:55.251085 | orchestrator | Friday 29 August 2025 21:17:33 +0000 (0:00:00.539) 0:05:11.426 ********* 2025-08-29 21:20:55.251091 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.251098 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.251104 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.251111 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
21:20:55.251117 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.251124 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251130 | orchestrator | 2025-08-29 21:20:55.251137 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 21:20:55.251143 | orchestrator | Friday 29 August 2025 21:17:34 +0000 (0:00:00.619) 0:05:12.045 ********* 2025-08-29 21:20:55.251150 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.251156 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.251163 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251169 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.251176 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.251182 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.251189 | orchestrator | 2025-08-29 21:20:55.251195 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 21:20:55.251202 | orchestrator | Friday 29 August 2025 21:17:35 +0000 (0:00:01.766) 0:05:13.812 ********* 2025-08-29 21:20:55.251209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.251216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.251230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251237 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.251248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.251255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.251263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251270 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.251277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 21:20:55.251292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 21:20:55.251303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251311 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.251318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.251325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251331 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.251338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.251349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251370 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.251380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 21:20:55.251391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 21:20:55.251398 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251405 | orchestrator | 2025-08-29 21:20:55.251412 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-08-29 21:20:55.251418 | orchestrator | Friday 29 August 2025 21:17:37 +0000 (0:00:01.632) 0:05:15.444 ********* 2025-08-29 21:20:55.251425 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 21:20:55.251432 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251438 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.251445 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 21:20:55.251451 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251458 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.251465 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 21:20:55.251471 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251478 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.251484 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 21:20:55.251491 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251497 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.251504 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 21:20:55.251510 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251517 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 21:20:55.251523 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 21:20:55.251530 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 21:20:55.251536 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251543 | orchestrator | 2025-08-29 21:20:55.251549 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-08-29 21:20:55.251556 | orchestrator | Friday 29 August 2025 21:17:38 +0000 (0:00:00.663) 0:05:16.108 ********* 2025-08-29 21:20:55.251575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 21:20:55.251705 | orchestrator | 2025-08-29 21:20:55.251712 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 21:20:55.251718 | orchestrator | Friday 29 August 2025 21:17:41 +0000 (0:00:02.961) 0:05:19.069 ********* 2025-08-29 21:20:55.251725 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.251732 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.251744 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.251760 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.251771 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.251782 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.251793 | orchestrator | 2025-08-29 21:20:55.251804 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251816 | orchestrator | Friday 29 August 2025 21:17:41 +0000 (0:00:00.540) 0:05:19.610 ********* 2025-08-29 21:20:55.251823 | orchestrator | 2025-08-29 21:20:55.251829 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251836 | orchestrator | Friday 29 August 2025 21:17:41 +0000 (0:00:00.124) 0:05:19.734 ********* 2025-08-29 21:20:55.251843 | orchestrator | 2025-08-29 21:20:55.251849 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251861 | orchestrator | Friday 29 August 2025 21:17:42 +0000 (0:00:00.277) 0:05:20.011 ********* 2025-08-29 21:20:55.251868 | orchestrator | 2025-08-29 21:20:55.251874 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251881 | orchestrator | Friday 29 August 2025 21:17:42 +0000 (0:00:00.127) 0:05:20.139 ********* 2025-08-29 21:20:55.251887 | orchestrator | 2025-08-29 21:20:55.251894 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251900 | orchestrator | Friday 29 August 2025 21:17:42 +0000 (0:00:00.123) 0:05:20.262 ********* 2025-08-29 21:20:55.251907 | orchestrator | 2025-08-29 21:20:55.251913 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 21:20:55.251920 | orchestrator | Friday 29 August 2025 21:17:42 +0000 (0:00:00.119) 0:05:20.382 ********* 2025-08-29 21:20:55.251927 | orchestrator | 2025-08-29 21:20:55.251933 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor 
container] ***************** 2025-08-29 21:20:55.251940 | orchestrator | Friday 29 August 2025 21:17:42 +0000 (0:00:00.120) 0:05:20.503 ********* 2025-08-29 21:20:55.251946 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.251953 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.251960 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.251966 | orchestrator | 2025-08-29 21:20:55.251973 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-08-29 21:20:55.251980 | orchestrator | Friday 29 August 2025 21:17:50 +0000 (0:00:07.681) 0:05:28.185 ********* 2025-08-29 21:20:55.251986 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.251993 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.251999 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.252006 | orchestrator | 2025-08-29 21:20:55.252012 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-08-29 21:20:55.252019 | orchestrator | Friday 29 August 2025 21:18:07 +0000 (0:00:17.218) 0:05:45.403 ********* 2025-08-29 21:20:55.252025 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.252032 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.252038 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.252045 | orchestrator | 2025-08-29 21:20:55.252052 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-08-29 21:20:55.252058 | orchestrator | Friday 29 August 2025 21:18:33 +0000 (0:00:26.316) 0:06:11.720 ********* 2025-08-29 21:20:55.252065 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.252071 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.252078 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.252084 | orchestrator | 2025-08-29 21:20:55.252091 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-08-29 21:20:55.252097 | orchestrator | Friday 29 August 2025 21:19:15 +0000 (0:00:41.795) 0:06:53.515 ********* 2025-08-29 21:20:55.252104 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-08-29 21:20:55.252110 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-08-29 21:20:55.252117 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 
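The handler above ("Checking libvirt container is ready") retried on all three compute nodes before succeeding; the command it runs is the same one listed in the nova-libvirt healthcheck earlier in this play ('virsh version --daemon'). A minimal sketch of reproducing that wait by hand on a compute node such as testbed-node-3, assuming Docker CLI access and the nova_libvirt container name shown in the log (an illustration only, not the role's actual implementation):

    # Poll the same command the kolla healthcheck uses, up to 10 attempts as in the log above.
    for attempt in $(seq 1 10); do
      if docker exec nova_libvirt virsh version --daemon >/dev/null 2>&1; then
        echo "nova_libvirt ready after ${attempt} attempt(s)"
        break
      fi
      echo "nova_libvirt not ready yet (attempt ${attempt}/10)"
      sleep 5
    done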
2025-08-29 21:20:55.252124 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.252130 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.252137 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.252143 | orchestrator | 2025-08-29 21:20:55.252150 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-08-29 21:20:55.252157 | orchestrator | Friday 29 August 2025 21:19:21 +0000 (0:00:06.222) 0:06:59.737 ********* 2025-08-29 21:20:55.252163 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.252170 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.252176 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.252183 | orchestrator | 2025-08-29 21:20:55.252189 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-08-29 21:20:55.252196 | orchestrator | Friday 29 August 2025 21:19:22 +0000 (0:00:01.009) 0:07:00.746 ********* 2025-08-29 21:20:55.252213 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:20:55.252219 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:20:55.252226 | orchestrator | changed: [testbed-node-5] 2025-08-29 21:20:55.252232 | orchestrator | 2025-08-29 21:20:55.252239 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-08-29 21:20:55.252246 | orchestrator | Friday 29 August 2025 21:19:49 +0000 (0:00:26.888) 0:07:27.635 ********* 2025-08-29 21:20:55.252252 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.252259 | orchestrator | 2025-08-29 21:20:55.252265 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-08-29 21:20:55.252272 | orchestrator | Friday 29 August 2025 21:19:49 +0000 (0:00:00.129) 0:07:27.764 ********* 2025-08-29 21:20:55.252278 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.252285 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.252291 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.252298 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.252305 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.252311 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
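The "Waiting for nova-compute services to register themselves" task above polls until the freshly restarted nova-compute services on testbed-node-3/4/5 appear. A hedged manual equivalent from any host with admin credentials loaded, using the plain OpenStack CLI rather than whatever the role runs internally:

    # Wait until at least three distinct hosts report a nova-compute service, then show them.
    until [ "$(openstack compute service list --service nova-compute -f value -c Host | sort -u | wc -l)" -ge 3 ]; do
      echo "nova-compute services still registering ..."
      sleep 10
    done
    openstack compute service list --service nova-compute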
2025-08-29 21:20:55.252318 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:20:55.252325 | orchestrator | 2025-08-29 21:20:55.252336 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-08-29 21:20:55.252343 | orchestrator | Friday 29 August 2025 21:20:11 +0000 (0:00:21.969) 0:07:49.734 ********* 2025-08-29 21:20:55.252349 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.252375 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.252382 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.252389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.252395 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.252402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.252408 | orchestrator | 2025-08-29 21:20:55.252415 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-08-29 21:20:55.252422 | orchestrator | Friday 29 August 2025 21:20:18 +0000 (0:00:06.982) 0:07:56.717 ********* 2025-08-29 21:20:55.252429 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.252435 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.252442 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.252448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.252455 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.252462 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-08-29 21:20:55.252468 | orchestrator | 2025-08-29 21:20:55.252475 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 21:20:55.252481 | orchestrator | Friday 29 August 2025 21:20:22 +0000 (0:00:03.499) 0:08:00.216 ********* 2025-08-29 21:20:55.252488 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:20:55.252494 | orchestrator | 2025-08-29 21:20:55.252501 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 21:20:55.252508 | orchestrator | Friday 29 August 2025 21:20:34 +0000 (0:00:12.377) 0:08:12.593 ********* 2025-08-29 21:20:55.252514 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:20:55.252521 | orchestrator | 2025-08-29 21:20:55.252527 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-08-29 21:20:55.252534 | orchestrator | Friday 29 August 2025 21:20:35 +0000 (0:00:01.310) 0:08:13.904 ********* 2025-08-29 21:20:55.252540 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.252547 | orchestrator | 2025-08-29 21:20:55.252553 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-08-29 21:20:55.252560 | orchestrator | Friday 29 August 2025 21:20:37 +0000 (0:00:01.238) 0:08:15.142 ********* 2025-08-29 21:20:55.252567 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:20:55.252577 | orchestrator | 2025-08-29 21:20:55.252584 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-08-29 21:20:55.252590 | orchestrator | Friday 29 August 2025 21:20:47 +0000 (0:00:10.116) 0:08:25.259 ********* 2025-08-29 21:20:55.252597 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:20:55.252604 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:20:55.252610 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 21:20:55.252617 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:20:55.252623 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:20:55.252630 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:20:55.252636 | orchestrator | 2025-08-29 21:20:55.252643 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-08-29 21:20:55.252650 | orchestrator | 2025-08-29 21:20:55.252656 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-08-29 21:20:55.252663 | orchestrator | Friday 29 August 2025 21:20:48 +0000 (0:00:01.681) 0:08:26.940 ********* 2025-08-29 21:20:55.252669 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:20:55.252676 | orchestrator | changed: [testbed-node-1] 2025-08-29 21:20:55.252683 | orchestrator | changed: [testbed-node-2] 2025-08-29 21:20:55.252689 | orchestrator | 2025-08-29 21:20:55.252696 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-08-29 21:20:55.252702 | orchestrator | 2025-08-29 21:20:55.252709 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-08-29 21:20:55.252715 | orchestrator | Friday 29 August 2025 21:20:49 +0000 (0:00:00.924) 0:08:27.865 ********* 2025-08-29 21:20:55.252722 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.252728 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.252735 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.252741 | orchestrator | 2025-08-29 21:20:55.252748 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-08-29 21:20:55.252755 | orchestrator | 2025-08-29 21:20:55.252761 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-08-29 21:20:55.252768 | orchestrator | Friday 29 August 2025 21:20:50 +0000 (0:00:00.676) 0:08:28.541 ********* 2025-08-29 21:20:55.252774 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-08-29 21:20:55.252781 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 21:20:55.252791 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 21:20:55.252797 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-08-29 21:20:55.252804 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-08-29 21:20:55.252811 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.252817 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:20:55.252824 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-08-29 21:20:55.252830 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 21:20:55.252842 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 21:20:55.252853 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-08-29 21:20:55.252864 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-08-29 21:20:55.252875 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.252887 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:20:55.252898 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-08-29 21:20:55.252910 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 21:20:55.252921 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 21:20:55.252934 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-08-29 21:20:55.252941 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-08-29 21:20:55.252948 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.252960 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:20:55.252967 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-08-29 21:20:55.252973 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 21:20:55.252980 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 21:20:55.252987 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-08-29 21:20:55.252993 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-08-29 21:20:55.253000 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.253006 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.253013 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-08-29 21:20:55.253020 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 21:20:55.253026 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 21:20:55.253033 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-08-29 21:20:55.253040 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-08-29 21:20:55.253046 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.253053 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.253060 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-08-29 21:20:55.253066 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 21:20:55.253073 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 21:20:55.253080 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-08-29 21:20:55.253086 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-08-29 21:20:55.253093 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-08-29 21:20:55.253099 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.253106 | orchestrator | 2025-08-29 21:20:55.253113 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-08-29 21:20:55.253119 | orchestrator | 2025-08-29 21:20:55.253126 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-08-29 21:20:55.253133 | orchestrator | Friday 29 August 2025 21:20:51 +0000 (0:00:01.257) 0:08:29.798 ********* 2025-08-29 21:20:55.253139 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-08-29 21:20:55.253146 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-08-29 21:20:55.253153 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.253159 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-08-29 21:20:55.253166 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-08-29 21:20:55.253173 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.253179 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-08-29 21:20:55.253186 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-08-29 21:20:55.253193 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.253199 | orchestrator | 2025-08-29 21:20:55.253206 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-08-29 21:20:55.253213 | orchestrator | 2025-08-29 21:20:55.253219 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-08-29 21:20:55.253226 | orchestrator | Friday 29 August 2025 21:20:52 +0000 (0:00:00.527) 0:08:30.326 ********* 2025-08-29 21:20:55.253233 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.253239 | orchestrator | 2025-08-29 21:20:55.253246 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-08-29 21:20:55.253253 | orchestrator | 2025-08-29 21:20:55.253259 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-08-29 21:20:55.253266 | orchestrator | Friday 29 August 2025 21:20:53 +0000 (0:00:00.821) 0:08:31.147 ********* 2025-08-29 21:20:55.253273 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:20:55.253279 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:20:55.253290 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:20:55.253296 | orchestrator | 2025-08-29 21:20:55.253303 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:20:55.253310 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:20:55.253321 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-08-29 21:20:55.253328 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 21:20:55.253335 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-08-29 21:20:55.253342 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 21:20:55.253348 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-08-29 21:20:55.253374 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-08-29 21:20:55.253382 | orchestrator | 2025-08-29 21:20:55.253388 | orchestrator | 2025-08-29 21:20:55.253395 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:20:55.253402 | orchestrator | Friday 29 August 2025 21:20:53 +0000 (0:00:00.421) 0:08:31.569 ********* 2025-08-29 21:20:55.253408 | orchestrator | =============================================================================== 2025-08-29 21:20:55.253415 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.80s 2025-08-29 21:20:55.253422 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 35.69s 2025-08-29 21:20:55.253428 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.89s 2025-08-29 21:20:55.253435 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.32s 2025-08-29 21:20:55.253441 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.97s 2025-08-29 21:20:55.253448 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 21.48s 2025-08-29 21:20:55.253455 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.69s 2025-08-29 21:20:55.253461 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.65s 2025-08-29 21:20:55.253468 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.22s 2025-08-29 21:20:55.253474 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.92s 2025-08-29 21:20:55.253481 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.38s 2025-08-29 21:20:55.253487 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.12s 2025-08-29 21:20:55.253494 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.76s 2025-08-29 21:20:55.253501 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.47s 2025-08-29 21:20:55.253507 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.12s 2025-08-29 21:20:55.253514 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.47s 2025-08-29 21:20:55.253521 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.68s 2025-08-29 21:20:55.253527 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.63s 2025-08-29 21:20:55.253534 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 7.11s 2025-08-29 21:20:55.253540 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.11s 2025-08-29 21:20:55.253551 | orchestrator | 2025-08-29 21:20:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:20:58.276562 | orchestrator | 2025-08-29 21:20:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:01.317621 | orchestrator | 2025-08-29 21:21:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:04.359795 | orchestrator | 2025-08-29 21:21:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:07.401414 | orchestrator | 2025-08-29 21:21:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:10.437450 | orchestrator | 2025-08-29 21:21:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:13.482883 | orchestrator | 2025-08-29 21:21:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:16.524579 | orchestrator | 2025-08-29 21:21:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:19.588434 | orchestrator | 2025-08-29 21:21:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:22.628309 | orchestrator | 2025-08-29 21:21:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:25.668516 | orchestrator | 2025-08-29 21:21:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:28.718592 | orchestrator | 2025-08-29 21:21:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:31.761708 | orchestrator | 2025-08-29 21:21:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:34.801292 | orchestrator | 2025-08-29 21:21:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:37.841852 | orchestrator 
| 2025-08-29 21:21:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:40.881123 | orchestrator | 2025-08-29 21:21:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:43.915318 | orchestrator | 2025-08-29 21:21:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:46.953335 | orchestrator | 2025-08-29 21:21:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:49.986951 | orchestrator | 2025-08-29 21:21:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:53.030150 | orchestrator | 2025-08-29 21:21:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 21:21:56.066099 | orchestrator | 2025-08-29 21:21:56.254008 | orchestrator | 2025-08-29 21:21:56.259086 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 21:21:56 UTC 2025 2025-08-29 21:21:56.259160 | orchestrator | 2025-08-29 21:21:56.613140 | orchestrator | ok: Runtime: 0:34:03.767535 2025-08-29 21:21:56.861811 | 2025-08-29 21:21:56.861969 | TASK [Bootstrap services] 2025-08-29 21:21:57.520231 | orchestrator | 2025-08-29 21:21:57.520413 | orchestrator | # BOOTSTRAP 2025-08-29 21:21:57.520439 | orchestrator | 2025-08-29 21:21:57.520453 | orchestrator | + set -e 2025-08-29 21:21:57.520467 | orchestrator | + echo 2025-08-29 21:21:57.520480 | orchestrator | + echo '# BOOTSTRAP' 2025-08-29 21:21:57.520497 | orchestrator | + echo 2025-08-29 21:21:57.520539 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-08-29 21:21:57.528208 | orchestrator | + set -e 2025-08-29 21:21:57.528240 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-08-29 21:22:00.848801 | orchestrator | 2025-08-29 21:22:00 | INFO  | It takes a moment until task b09d539b-17c7-40bb-81e2-ab10013cab09 (flavor-manager) has been started and output is visible here. 
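The shell trace above shows how the bootstrap phase is wired: the job prints the # BOOTSTRAP header and hands off to bootstrap-services.sh, which runs under set -e and dispatches to the numbered scripts below /opt/configuration/scripts/bootstrap/. The wrapper itself is not printed in this log, but based on the "+ sh -c ..." trace lines it reduces to something like the following sketch:

#!/usr/bin/env bash
# Assumed shape of /opt/configuration/scripts/bootstrap-services.sh, reconstructed
# from the trace in this log; only the two calls visible here are listed.
set -e
sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh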
2025-08-29 21:22:07.955687 | orchestrator | 2025-08-29 21:22:04 | INFO  | Flavor SCS-1V-4 created
2025-08-29 21:22:07.955790 | orchestrator | 2025-08-29 21:22:04 | INFO  | Flavor SCS-2V-8 created
2025-08-29 21:22:07.955807 | orchestrator | 2025-08-29 21:22:04 | INFO  | Flavor SCS-4V-16 created
2025-08-29 21:22:07.955820 | orchestrator | 2025-08-29 21:22:04 | INFO  | Flavor SCS-8V-32 created
2025-08-29 21:22:07.955831 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-1V-2 created
2025-08-29 21:22:07.955842 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-2V-4 created
2025-08-29 21:22:07.955853 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-4V-8 created
2025-08-29 21:22:07.955865 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-8V-16 created
2025-08-29 21:22:07.955887 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-16V-32 created
2025-08-29 21:22:07.955899 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-1V-8 created
2025-08-29 21:22:07.955909 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-2V-16 created
2025-08-29 21:22:07.955920 | orchestrator | 2025-08-29 21:22:05 | INFO  | Flavor SCS-4V-32 created
2025-08-29 21:22:07.955931 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-1L-1 created
2025-08-29 21:22:07.955942 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-2V-4-20s created
2025-08-29 21:22:07.955952 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-4V-16-100s created
2025-08-29 21:22:07.955963 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-1V-4-10 created
2025-08-29 21:22:07.955974 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-2V-8-20 created
2025-08-29 21:22:07.955984 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-4V-16-50 created
2025-08-29 21:22:07.955995 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-8V-32-100 created
2025-08-29 21:22:07.956006 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-1V-2-5 created
2025-08-29 21:22:07.956016 | orchestrator | 2025-08-29 21:22:06 | INFO  | Flavor SCS-2V-4-10 created
2025-08-29 21:22:07.956026 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-4V-8-20 created
2025-08-29 21:22:07.956038 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-8V-16-50 created
2025-08-29 21:22:07.956048 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-16V-32-100 created
2025-08-29 21:22:07.956059 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-1V-8-20 created
2025-08-29 21:22:07.956070 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-2V-16-50 created
2025-08-29 21:22:07.956080 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-4V-32-100 created
2025-08-29 21:22:07.956091 | orchestrator | 2025-08-29 21:22:07 | INFO  | Flavor SCS-1L-1-5 created
2025-08-29 21:22:09.973944 | orchestrator | 2025-08-29 21:22:09 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-08-29 21:22:20.125109 | orchestrator | 2025-08-29 21:22:20 | INFO  | Task a3d51a04-c6ee-4834-be9e-4e3d342d7975 (bootstrap-basic) was prepared for execution.
2025-08-29 21:22:20.125191 | orchestrator | 2025-08-29 21:22:20 | INFO  | It takes a moment until task a3d51a04-c6ee-4834-be9e-4e3d342d7975 (bootstrap-basic) has been started and output is visible here.
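The flavors created above follow the SCS naming scheme: SCS-<vCPUs>V-<RAM in GiB>[-<root disk in GB>], where an L in place of V marks the low-performance vCPU class and a trailing s on the disk size marks a local SSD, so SCS-2V-8-20 is 2 vCPUs, 8 GiB RAM and a 20 GB root disk. The flavor-manager task derives these from its built-in SCS defaults; creating one of them by hand with the OpenStack client would look roughly like this (the extra specs flavor-manager sets are not visible in the log and are omitted):

# Manual equivalent of one entry from the list above (Nova takes RAM in MiB).
openstack flavor create --vcpus 2 --ram 8192 --disk 20 --public SCS-2V-8-20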
2025-08-29 21:23:18.534493 | orchestrator | 2025-08-29 21:23:18.534616 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-08-29 21:23:18.534630 | orchestrator | 2025-08-29 21:23:18.534641 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 21:23:18.534658 | orchestrator | Friday 29 August 2025 21:22:23 +0000 (0:00:00.067) 0:00:00.067 ********* 2025-08-29 21:23:18.534667 | orchestrator | ok: [localhost] 2025-08-29 21:23:18.534677 | orchestrator | 2025-08-29 21:23:18.534685 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-08-29 21:23:18.534696 | orchestrator | Friday 29 August 2025 21:22:25 +0000 (0:00:01.586) 0:00:01.653 ********* 2025-08-29 21:23:18.534705 | orchestrator | ok: [localhost] 2025-08-29 21:23:18.534713 | orchestrator | 2025-08-29 21:23:18.534721 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-08-29 21:23:18.534729 | orchestrator | Friday 29 August 2025 21:22:32 +0000 (0:00:07.462) 0:00:09.116 ********* 2025-08-29 21:23:18.534737 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534745 | orchestrator | 2025-08-29 21:23:18.534753 | orchestrator | TASK [Get volume type local] *************************************************** 2025-08-29 21:23:18.534761 | orchestrator | Friday 29 August 2025 21:22:40 +0000 (0:00:07.537) 0:00:16.654 ********* 2025-08-29 21:23:18.534769 | orchestrator | ok: [localhost] 2025-08-29 21:23:18.534777 | orchestrator | 2025-08-29 21:23:18.534786 | orchestrator | TASK [Create volume type local] ************************************************ 2025-08-29 21:23:18.534794 | orchestrator | Friday 29 August 2025 21:22:46 +0000 (0:00:06.098) 0:00:22.752 ********* 2025-08-29 21:23:18.534802 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534813 | orchestrator | 2025-08-29 21:23:18.534821 | orchestrator | TASK [Create public network] *************************************************** 2025-08-29 21:23:18.534829 | orchestrator | Friday 29 August 2025 21:22:52 +0000 (0:00:06.253) 0:00:29.006 ********* 2025-08-29 21:23:18.534837 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534845 | orchestrator | 2025-08-29 21:23:18.534853 | orchestrator | TASK [Set public network to default] ******************************************* 2025-08-29 21:23:18.534861 | orchestrator | Friday 29 August 2025 21:22:58 +0000 (0:00:06.123) 0:00:35.130 ********* 2025-08-29 21:23:18.534868 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534876 | orchestrator | 2025-08-29 21:23:18.534892 | orchestrator | TASK [Create public subnet] **************************************************** 2025-08-29 21:23:18.534900 | orchestrator | Friday 29 August 2025 21:23:05 +0000 (0:00:06.954) 0:00:42.085 ********* 2025-08-29 21:23:18.534908 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534916 | orchestrator | 2025-08-29 21:23:18.534933 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-08-29 21:23:18.534941 | orchestrator | Friday 29 August 2025 21:23:10 +0000 (0:00:04.521) 0:00:46.607 ********* 2025-08-29 21:23:18.534949 | orchestrator | changed: [localhost] 2025-08-29 21:23:18.534957 | orchestrator | 2025-08-29 21:23:18.534965 | orchestrator | TASK [Create manager role] ***************************************************** 2025-08-29 21:23:18.534973 | orchestrator | Friday 29 August 2025 
21:23:14 +0000 (0:00:04.623) 0:00:51.231 ********* 2025-08-29 21:23:18.534981 | orchestrator | ok: [localhost] 2025-08-29 21:23:18.534990 | orchestrator | 2025-08-29 21:23:18.535000 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:23:18.535009 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:23:18.535019 | orchestrator | 2025-08-29 21:23:18.535027 | orchestrator | 2025-08-29 21:23:18.535043 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:23:18.535052 | orchestrator | Friday 29 August 2025 21:23:18 +0000 (0:00:03.422) 0:00:54.653 ********* 2025-08-29 21:23:18.535084 | orchestrator | =============================================================================== 2025-08-29 21:23:18.535093 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.54s 2025-08-29 21:23:18.535102 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.46s 2025-08-29 21:23:18.535111 | orchestrator | Set public network to default ------------------------------------------- 6.95s 2025-08-29 21:23:18.535120 | orchestrator | Create volume type local ------------------------------------------------ 6.25s 2025-08-29 21:23:18.535129 | orchestrator | Create public network --------------------------------------------------- 6.12s 2025-08-29 21:23:18.535138 | orchestrator | Get volume type local --------------------------------------------------- 6.10s 2025-08-29 21:23:18.535147 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.62s 2025-08-29 21:23:18.535155 | orchestrator | Create public subnet ---------------------------------------------------- 4.52s 2025-08-29 21:23:18.535164 | orchestrator | Create manager role ----------------------------------------------------- 3.42s 2025-08-29 21:23:18.535172 | orchestrator | Gathering Facts --------------------------------------------------------- 1.59s 2025-08-29 21:23:20.740110 | orchestrator | 2025-08-29 21:23:20 | INFO  | It takes a moment until task 997bacff-3743-4d0a-ba6f-87ee0ea8329c (image-manager) has been started and output is visible here. 2025-08-29 21:24:00.790583 | orchestrator | 2025-08-29 21:23:24 | INFO  | Processing image 'Cirros 0.6.2' 2025-08-29 21:24:00.790693 | orchestrator | 2025-08-29 21:23:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-08-29 21:24:00.790711 | orchestrator | 2025-08-29 21:23:24 | INFO  | Importing image Cirros 0.6.2 2025-08-29 21:24:00.790722 | orchestrator | 2025-08-29 21:23:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-08-29 21:24:00.790734 | orchestrator | 2025-08-29 21:23:26 | INFO  | Waiting for image to leave queued state... 2025-08-29 21:24:00.790744 | orchestrator | 2025-08-29 21:23:28 | INFO  | Waiting for import to complete... 
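The bootstrap-basic play recapped above prepares the minimal OpenStack resources of the testbed: LUKS and local volume types, an external public network marked as the default, a public subnet, a default IPv4 subnet pool, and a manager role. Done by hand with the OpenStack client the same steps look roughly as follows; the CIDRs and encryption parameters are illustrative assumptions, since the play's variables are not shown in the log:

# Volume types (encryption parameters are typical values, not taken from the play).
openstack volume type create --encryption-provider luks \
  --encryption-cipher aes-xts-plain64 --encryption-key-size 256 \
  --encryption-control-location front-end LUKS
openstack volume type create local

# External provider network, flagged as the default external network.
openstack network create --external public
openstack network set --default public

# Public subnet and default IPv4 subnet pool (example ranges).
openstack subnet create --network public --subnet-range 192.168.112.0/20 subnet-public
openstack subnet pool create --default --pool-prefix 10.0.0.0/16 --default-prefix-length 24 default-ipv4

# The play reports "ok" for the manager role, i.e. it already existed.
openstack role create manager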
2025-08-29 21:24:00.790754 | orchestrator | 2025-08-29 21:23:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-08-29 21:24:00.790764 | orchestrator | 2025-08-29 21:23:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-08-29 21:24:00.790773 | orchestrator | 2025-08-29 21:23:38 | INFO  | Setting internal_version = 0.6.2 2025-08-29 21:24:00.790783 | orchestrator | 2025-08-29 21:23:38 | INFO  | Setting image_original_user = cirros 2025-08-29 21:24:00.790793 | orchestrator | 2025-08-29 21:23:38 | INFO  | Adding tag os:cirros 2025-08-29 21:24:00.790803 | orchestrator | 2025-08-29 21:23:39 | INFO  | Setting property architecture: x86_64 2025-08-29 21:24:00.790812 | orchestrator | 2025-08-29 21:23:39 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 21:24:00.790822 | orchestrator | 2025-08-29 21:23:39 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 21:24:00.790831 | orchestrator | 2025-08-29 21:23:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 21:24:00.790841 | orchestrator | 2025-08-29 21:23:40 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 21:24:00.790850 | orchestrator | 2025-08-29 21:23:40 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 21:24:00.790859 | orchestrator | 2025-08-29 21:23:40 | INFO  | Setting property os_distro: cirros 2025-08-29 21:24:00.790869 | orchestrator | 2025-08-29 21:23:40 | INFO  | Setting property replace_frequency: never 2025-08-29 21:24:00.790878 | orchestrator | 2025-08-29 21:23:40 | INFO  | Setting property uuid_validity: none 2025-08-29 21:24:00.790888 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property provided_until: none 2025-08-29 21:24:00.790922 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property image_description: Cirros 2025-08-29 21:24:00.790941 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property image_name: Cirros 2025-08-29 21:24:00.790951 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property internal_version: 0.6.2 2025-08-29 21:24:00.790965 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property image_original_user: cirros 2025-08-29 21:24:00.790975 | orchestrator | 2025-08-29 21:23:41 | INFO  | Setting property os_version: 0.6.2 2025-08-29 21:24:00.790984 | orchestrator | 2025-08-29 21:23:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-08-29 21:24:00.790996 | orchestrator | 2025-08-29 21:23:42 | INFO  | Setting property image_build_date: 2023-05-30 2025-08-29 21:24:00.791005 | orchestrator | 2025-08-29 21:23:42 | INFO  | Checking status of 'Cirros 0.6.2' 2025-08-29 21:24:00.791014 | orchestrator | 2025-08-29 21:23:42 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-08-29 21:24:00.791023 | orchestrator | 2025-08-29 21:23:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-08-29 21:24:00.791033 | orchestrator | 2025-08-29 21:23:42 | INFO  | Processing image 'Cirros 0.6.3' 2025-08-29 21:24:00.791042 | orchestrator | 2025-08-29 21:23:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-08-29 21:24:00.791052 | orchestrator | 2025-08-29 21:23:43 | INFO  | Importing image Cirros 0.6.3 2025-08-29 21:24:00.791063 | orchestrator | 2025-08-29 21:23:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-08-29 21:24:00.791074 | orchestrator | 2025-08-29 
21:23:44 | INFO  | Waiting for image to leave queued state... 2025-08-29 21:24:00.791086 | orchestrator | 2025-08-29 21:23:46 | INFO  | Waiting for import to complete... 2025-08-29 21:24:00.791097 | orchestrator | 2025-08-29 21:23:56 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-08-29 21:24:00.791124 | orchestrator | 2025-08-29 21:23:56 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-08-29 21:24:00.791136 | orchestrator | 2025-08-29 21:23:56 | INFO  | Setting internal_version = 0.6.3 2025-08-29 21:24:00.791147 | orchestrator | 2025-08-29 21:23:56 | INFO  | Setting image_original_user = cirros 2025-08-29 21:24:00.791158 | orchestrator | 2025-08-29 21:23:56 | INFO  | Adding tag os:cirros 2025-08-29 21:24:00.791169 | orchestrator | 2025-08-29 21:23:56 | INFO  | Setting property architecture: x86_64 2025-08-29 21:24:00.791180 | orchestrator | 2025-08-29 21:23:56 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 21:24:00.791191 | orchestrator | 2025-08-29 21:23:57 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 21:24:00.791202 | orchestrator | 2025-08-29 21:23:57 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 21:24:00.791213 | orchestrator | 2025-08-29 21:23:57 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 21:24:00.791225 | orchestrator | 2025-08-29 21:23:57 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 21:24:00.791236 | orchestrator | 2025-08-29 21:23:57 | INFO  | Setting property os_distro: cirros 2025-08-29 21:24:00.791247 | orchestrator | 2025-08-29 21:23:58 | INFO  | Setting property replace_frequency: never 2025-08-29 21:24:00.791258 | orchestrator | 2025-08-29 21:23:58 | INFO  | Setting property uuid_validity: none 2025-08-29 21:24:00.791276 | orchestrator | 2025-08-29 21:23:58 | INFO  | Setting property provided_until: none 2025-08-29 21:24:00.791287 | orchestrator | 2025-08-29 21:23:58 | INFO  | Setting property image_description: Cirros 2025-08-29 21:24:00.791298 | orchestrator | 2025-08-29 21:23:58 | INFO  | Setting property image_name: Cirros 2025-08-29 21:24:00.791310 | orchestrator | 2025-08-29 21:23:59 | INFO  | Setting property internal_version: 0.6.3 2025-08-29 21:24:00.791321 | orchestrator | 2025-08-29 21:23:59 | INFO  | Setting property image_original_user: cirros 2025-08-29 21:24:00.791332 | orchestrator | 2025-08-29 21:23:59 | INFO  | Setting property os_version: 0.6.3 2025-08-29 21:24:00.791343 | orchestrator | 2025-08-29 21:23:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-08-29 21:24:00.791354 | orchestrator | 2025-08-29 21:23:59 | INFO  | Setting property image_build_date: 2024-09-26 2025-08-29 21:24:00.791365 | orchestrator | 2025-08-29 21:24:00 | INFO  | Checking status of 'Cirros 0.6.3' 2025-08-29 21:24:00.791376 | orchestrator | 2025-08-29 21:24:00 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-08-29 21:24:00.791391 | orchestrator | 2025-08-29 21:24:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-08-29 21:24:01.058973 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-08-29 21:24:02.996355 | orchestrator | 2025-08-29 21:24:02 | INFO  | date: 2025-08-29 2025-08-29 21:24:02.996959 | orchestrator | 2025-08-29 21:24:02 | INFO  | image: octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 21:24:02.996989 | orchestrator | 2025-08-29 21:24:02 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 21:24:02.997022 | orchestrator | 2025-08-29 21:24:02 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2.CHECKSUM 2025-08-29 21:24:03.028588 | orchestrator | 2025-08-29 21:24:03 | INFO  | checksum: 9bd11944634778935b43eb626302bc74d657e4c319fdb6fd625fdfeb36ffc69d 2025-08-29 21:24:03.115201 | orchestrator | 2025-08-29 21:24:03 | INFO  | It takes a moment until task c01bb07d-b2f7-460f-8771-0cabdaee2a9d (image-manager) has been started and output is visible here. 2025-08-29 21:25:03.410966 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-08-29 21:25:03.411099 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-08-29 21:25:03.411119 | orchestrator | 2025-08-29 21:24:05 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 21:25:03.411137 | orchestrator | 2025-08-29 21:24:05 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2: 200 2025-08-29 21:25:03.411152 | orchestrator | 2025-08-29 21:24:05 | INFO  | Importing image OpenStack Octavia Amphora 2025-08-29 2025-08-29 21:25:03.411164 | orchestrator | 2025-08-29 21:24:05 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 21:25:03.411177 | orchestrator | 2025-08-29 21:24:05 | INFO  | Waiting for image to leave queued state... 2025-08-29 21:25:03.411216 | orchestrator | 2025-08-29 21:24:07 | INFO  | Waiting for import to complete... 2025-08-29 21:25:03.411228 | orchestrator | 2025-08-29 21:24:17 | INFO  | Waiting for import to complete... 2025-08-29 21:25:03.411238 | orchestrator | 2025-08-29 21:24:28 | INFO  | Waiting for import to complete... 2025-08-29 21:25:03.411249 | orchestrator | 2025-08-29 21:24:38 | INFO  | Waiting for import to complete... 2025-08-29 21:25:03.411260 | orchestrator | 2025-08-29 21:24:48 | INFO  | Waiting for import to complete... 
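The amphora bootstrap script first resolves the dated image name, its download URL and the expected SHA256 from the published .CHECKSUM object, as logged above, before handing the actual import to the image-manager task. Verifying such a download by hand follows the usual pattern; the URL and checksum below are taken from the log, the local path is arbitrary:

url=https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
expected=9bd11944634778935b43eb626302bc74d657e4c319fdb6fd625fdfeb36ffc69d

curl -fsSLo /tmp/amphora.qcow2 "$url"
# sha256sum -c exits non-zero on mismatch (note the two spaces in the checksum line format).
echo "${expected}  /tmp/amphora.qcow2" | sha256sum -c -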
2025-08-29 21:25:03.411271 | orchestrator | 2025-08-29 21:24:58 | INFO  | Import of 'OpenStack Octavia Amphora 2025-08-29' successfully completed, reloading images 2025-08-29 21:25:03.411282 | orchestrator | 2025-08-29 21:24:58 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 21:25:03.411293 | orchestrator | 2025-08-29 21:24:58 | INFO  | Setting internal_version = 2025-08-29 2025-08-29 21:25:03.411304 | orchestrator | 2025-08-29 21:24:58 | INFO  | Setting image_original_user = ubuntu 2025-08-29 21:25:03.411315 | orchestrator | 2025-08-29 21:24:58 | INFO  | Adding tag amphora 2025-08-29 21:25:03.411326 | orchestrator | 2025-08-29 21:24:59 | INFO  | Adding tag os:ubuntu 2025-08-29 21:25:03.411336 | orchestrator | 2025-08-29 21:24:59 | INFO  | Setting property architecture: x86_64 2025-08-29 21:25:03.411347 | orchestrator | 2025-08-29 21:24:59 | INFO  | Setting property hw_disk_bus: scsi 2025-08-29 21:25:03.411358 | orchestrator | 2025-08-29 21:24:59 | INFO  | Setting property hw_rng_model: virtio 2025-08-29 21:25:03.411378 | orchestrator | 2025-08-29 21:24:59 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-08-29 21:25:03.411390 | orchestrator | 2025-08-29 21:25:00 | INFO  | Setting property hw_watchdog_action: reset 2025-08-29 21:25:03.411401 | orchestrator | 2025-08-29 21:25:00 | INFO  | Setting property hypervisor_type: qemu 2025-08-29 21:25:03.411411 | orchestrator | 2025-08-29 21:25:00 | INFO  | Setting property os_distro: ubuntu 2025-08-29 21:25:03.411422 | orchestrator | 2025-08-29 21:25:00 | INFO  | Setting property replace_frequency: quarterly 2025-08-29 21:25:03.411433 | orchestrator | 2025-08-29 21:25:00 | INFO  | Setting property uuid_validity: last-1 2025-08-29 21:25:03.411443 | orchestrator | 2025-08-29 21:25:01 | INFO  | Setting property provided_until: none 2025-08-29 21:25:03.411454 | orchestrator | 2025-08-29 21:25:01 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-08-29 21:25:03.411465 | orchestrator | 2025-08-29 21:25:01 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-08-29 21:25:03.411477 | orchestrator | 2025-08-29 21:25:01 | INFO  | Setting property internal_version: 2025-08-29 2025-08-29 21:25:03.411489 | orchestrator | 2025-08-29 21:25:02 | INFO  | Setting property image_original_user: ubuntu 2025-08-29 21:25:03.411501 | orchestrator | 2025-08-29 21:25:02 | INFO  | Setting property os_version: 2025-08-29 2025-08-29 21:25:03.411514 | orchestrator | 2025-08-29 21:25:02 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2 2025-08-29 21:25:03.411579 | orchestrator | 2025-08-29 21:25:02 | INFO  | Setting property image_build_date: 2025-08-29 2025-08-29 21:25:03.411594 | orchestrator | 2025-08-29 21:25:03 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 21:25:03.411607 | orchestrator | 2025-08-29 21:25:03 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-08-29' 2025-08-29 21:25:03.411629 | orchestrator | 2025-08-29 21:25:03 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-08-29 21:25:03.411642 | orchestrator | 2025-08-29 21:25:03 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-08-29 21:25:03.411655 | orchestrator | 2025-08-29 21:25:03 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-08-29 21:25:03.411669 | 
orchestrator | 2025-08-29 21:25:03 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-08-29 21:25:04.026253 | orchestrator | ok: Runtime: 0:03:06.538861 2025-08-29 21:25:04.050218 | 2025-08-29 21:25:04.050428 | TASK [Run checks] 2025-08-29 21:25:04.764531 | orchestrator | + set -e 2025-08-29 21:25:04.764703 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 21:25:04.764726 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 21:25:04.764747 | orchestrator | ++ INTERACTIVE=false 2025-08-29 21:25:04.764761 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 21:25:04.764774 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 21:25:04.764799 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 21:25:04.765646 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 21:25:04.771391 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 21:25:04.771422 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 21:25:04.771436 | orchestrator | + echo 2025-08-29 21:25:04.771452 | orchestrator | 2025-08-29 21:25:04.771464 | orchestrator | # CHECK 2025-08-29 21:25:04.771476 | orchestrator | 2025-08-29 21:25:04.771497 | orchestrator | + echo '# CHECK' 2025-08-29 21:25:04.771508 | orchestrator | + echo 2025-08-29 21:25:04.771523 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 21:25:04.772439 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 21:25:04.832390 | orchestrator | 2025-08-29 21:25:04.832423 | orchestrator | ## Containers @ testbed-manager 2025-08-29 21:25:04.832435 | orchestrator | 2025-08-29 21:25:04.832447 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 21:25:04.832458 | orchestrator | + echo 2025-08-29 21:25:04.832470 | orchestrator | + echo '## Containers @ testbed-manager' 2025-08-29 21:25:04.832481 | orchestrator | + echo 2025-08-29 21:25:04.832492 | orchestrator | + osism container testbed-manager ps 2025-08-29 21:25:06.816387 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 21:25:06.816488 | orchestrator | 3032ce87240a registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter 2025-08-29 21:25:06.816510 | orchestrator | f4dba9391016 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager 2025-08-29 21:25:06.816527 | orchestrator | 15f5f42896ff registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-08-29 21:25:06.816538 | orchestrator | 79a437ecb355 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-08-29 21:25:06.816576 | orchestrator | 1abc4ed4535e registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_server 2025-08-29 21:25:06.816596 | orchestrator | 4215e45653ad registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-08-29 21:25:06.816619 | orchestrator | e31ac41155d3 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-08-29 21:25:06.816636 | 
orchestrator | 36cb31d9a1e2 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-08-29 21:25:06.816647 | orchestrator | b824c5b37bf4 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-08-29 21:25:06.816677 | orchestrator | 8b232c7cb6e4 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-08-29 21:25:06.816688 | orchestrator | 2ad963edf368 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient 2025-08-29 21:25:06.816699 | orchestrator | eaf65e0a3f80 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-08-29 21:25:06.816709 | orchestrator | 0ae45963329f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-08-29 21:25:06.816724 | orchestrator | 0a0f4ba14874 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 57 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2025-08-29 21:25:06.816751 | orchestrator | 74a696788d6e registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-08-29 21:25:06.816762 | orchestrator | d32a5874a959 registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-08-29 21:25:06.816772 | orchestrator | 1be234448f65 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-ansible 2025-08-29 21:25:06.816782 | orchestrator | 97e21905c44b registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-08-29 21:25:06.816792 | orchestrator | 02e82334e740 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2025-08-29 21:25:06.816802 | orchestrator | 3ad284dc4597 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-openstack-1 2025-08-29 21:25:06.816812 | orchestrator | d5d4c38c5197 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1 2025-08-29 21:25:06.816822 | orchestrator | f87a8ee9866f registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-08-29 21:25:06.816833 | orchestrator | dc40c29be7e4 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-listener-1 2025-08-29 21:25:06.816850 | orchestrator | b50f4202710d registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 57 minutes ago Up 38 minutes (healthy) osismclient 2025-08-29 21:25:06.816860 | orchestrator | dee87a9f266e registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 38 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-08-29 21:25:06.816870 | orchestrator | 3fb682bea9ee registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1 
2025-08-29 21:25:06.816880 | orchestrator | cb01dfe68e9d registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-beat-1 2025-08-29 21:25:06.816890 | orchestrator | 4a1786d51b8a registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-flower-1 2025-08-29 21:25:06.816900 | orchestrator | b5cca82e72a8 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-08-29 21:25:06.988007 | orchestrator | 2025-08-29 21:25:06.988074 | orchestrator | ## Images @ testbed-manager 2025-08-29 21:25:06.988087 | orchestrator | 2025-08-29 21:25:06.988099 | orchestrator | + echo 2025-08-29 21:25:06.988111 | orchestrator | + echo '## Images @ testbed-manager' 2025-08-29 21:25:06.988122 | orchestrator | + echo 2025-08-29 21:25:06.988134 | orchestrator | + osism container testbed-manager images 2025-08-29 21:25:08.868357 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 21:25:08.868483 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e303c4555969 14 hours ago 237MB 2025-08-29 21:25:08.868507 | orchestrator | registry.osism.tech/osism/osism-frontend latest d2e016114477 21 hours ago 236MB 2025-08-29 21:25:08.868582 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 3 weeks ago 11.5MB 2025-08-29 21:25:08.868608 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 6 weeks ago 571MB 2025-08-29 21:25:08.868628 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 7 weeks ago 628MB 2025-08-29 21:25:08.868647 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 7 weeks ago 746MB 2025-08-29 21:25:08.868666 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 7 weeks ago 318MB 2025-08-29 21:25:08.868684 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 7 weeks ago 891MB 2025-08-29 21:25:08.868699 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 7 weeks ago 360MB 2025-08-29 21:25:08.868710 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 7 weeks ago 456MB 2025-08-29 21:25:08.868720 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 7 weeks ago 410MB 2025-08-29 21:25:08.868753 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 7 weeks ago 358MB 2025-08-29 21:25:08.868765 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 7 weeks ago 575MB 2025-08-29 21:25:08.868776 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 7 weeks ago 535MB 2025-08-29 21:25:08.868787 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 7 weeks ago 308MB 2025-08-29 21:25:08.868797 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 7 weeks ago 1.21GB 2025-08-29 21:25:08.868808 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 7 weeks ago 310MB 2025-08-29 21:25:08.868818 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 7 weeks ago 41.4MB 2025-08-29 
21:25:08.868829 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB 2025-08-29 21:25:08.868840 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 2 months ago 329MB 2025-08-29 21:25:08.868850 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 months ago 453MB 2025-08-29 21:25:08.868861 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB 2025-08-29 21:25:08.868871 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 11 months ago 300MB 2025-08-29 21:25:08.868883 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 14 months ago 146MB 2025-08-29 21:25:09.040645 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 21:25:09.041148 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 21:25:09.097683 | orchestrator | 2025-08-29 21:25:09.097762 | orchestrator | ## Containers @ testbed-node-0 2025-08-29 21:25:09.097777 | orchestrator | 2025-08-29 21:25:09.097789 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 21:25:09.097801 | orchestrator | + echo 2025-08-29 21:25:09.097813 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-08-29 21:25:09.097825 | orchestrator | + echo 2025-08-29 21:25:09.097836 | orchestrator | + osism container testbed-node-0 ps 2025-08-29 21:25:11.117053 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 21:25:11.117150 | orchestrator | 6a72d10e23bc registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-08-29 21:25:11.117166 | orchestrator | 912a68b0dae6 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-08-29 21:25:11.117178 | orchestrator | 3b343dcfd637 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-08-29 21:25:11.117189 | orchestrator | 25baa4b5e374 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-08-29 21:25:11.117199 | orchestrator | 9835b39504e0 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-08-29 21:25:11.117230 | orchestrator | 4ae72923ad1e registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-08-29 21:25:11.117242 | orchestrator | b9938730716c registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-08-29 21:25:11.117273 | orchestrator | 78db694149a5 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-08-29 21:25:11.117285 | orchestrator | eeb693f4d169 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-08-29 21:25:11.117297 | orchestrator | 87b9de244a84 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-08-29 21:25:11.117308 | orchestrator | 8cb56c2aa1c8 
registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-08-29 21:25:11.117318 | orchestrator | 84bf372ce5a6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-08-29 21:25:11.117329 | orchestrator | b1692fdc90d2 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-08-29 21:25:11.117340 | orchestrator | fb4d3eddb30c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-08-29 21:25:11.117351 | orchestrator | 18015f6473be registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) magnum_api 2025-08-29 21:25:11.117362 | orchestrator | 8419d5ea90fd registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-08-29 21:25:11.117372 | orchestrator | 3f23173a20e9 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-08-29 21:25:11.117383 | orchestrator | 6bb31642b3b1 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-08-29 21:25:11.117394 | orchestrator | 56ce1b1f94ce registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-08-29 21:25:11.117421 | orchestrator | 4d5efecdaffd registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-08-29 21:25:11.117432 | orchestrator | 4ff4587a1ba0 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-08-29 21:25:11.117443 | orchestrator | 4f80b2f14a6d registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-08-29 21:25:11.117454 | orchestrator | 98511d94d86e registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-08-29 21:25:11.117465 | orchestrator | 59e0896790c2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-08-29 21:25:11.117481 | orchestrator | 0fa153d8998f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-08-29 21:25:11.117501 | orchestrator | 878b07babc93 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-08-29 21:25:11.117512 | orchestrator | a00ba5621d2f registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) barbican_api 2025-08-29 21:25:11.117522 | orchestrator | b444edcf5003 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 
18 minutes (healthy) keystone 2025-08-29 21:25:11.117538 | orchestrator | d2c80219636d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-08-29 21:25:11.117549 | orchestrator | 8787c186077d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-08-29 21:25:11.117585 | orchestrator | 6603813da0c5 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-08-29 21:25:11.117596 | orchestrator | da933d574cbe registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-08-29 21:25:11.117607 | orchestrator | 37a42bf0aba4 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-08-29 21:25:11.117618 | orchestrator | 84d9e12d58eb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-08-29 21:25:11.117629 | orchestrator | b7a584e6967b registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-08-29 21:25:11.117645 | orchestrator | b5fde68e27c7 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 21:25:11.117656 | orchestrator | 041c8fec9720 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-08-29 21:25:11.117667 | orchestrator | 5c873f614b89 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-08-29 21:25:11.117677 | orchestrator | 7ec69a880ad0 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 21:25:11.117688 | orchestrator | ce6daa0e92f5 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 21:25:11.117707 | orchestrator | df954b89a8d1 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-08-29 21:25:11.117718 | orchestrator | 09b265cfb805 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2025-08-29 21:25:11.117729 | orchestrator | 2576333d06c7 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-08-29 21:25:11.117746 | orchestrator | 8589f42cfb7c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-08-29 21:25:11.117757 | orchestrator | d019dbd925b2 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-08-29 21:25:11.117768 | orchestrator | ebe146190d49 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-08-29 21:25:11.117779 | orchestrator | d356faee8cd3 
registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-08-29 21:25:11.117790 | orchestrator | b5b3b9270293 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-08-29 21:25:11.117801 | orchestrator | 7ece641d1128 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-08-29 21:25:11.117812 | orchestrator | 13155ec25282 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-08-29 21:25:11.117823 | orchestrator | 08c69bdf53ec registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-08-29 21:25:11.117834 | orchestrator | d242cf38d3ba registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-08-29 21:25:11.298484 | orchestrator | 2025-08-29 21:25:11.298547 | orchestrator | ## Images @ testbed-node-0 2025-08-29 21:25:11.298578 | orchestrator | 2025-08-29 21:25:11.298590 | orchestrator | + echo 2025-08-29 21:25:11.298602 | orchestrator | + echo '## Images @ testbed-node-0' 2025-08-29 21:25:11.298613 | orchestrator | + echo 2025-08-29 21:25:11.298625 | orchestrator | + osism container testbed-node-0 images 2025-08-29 21:25:13.311249 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 21:25:13.311390 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 7 weeks ago 628MB 2025-08-29 21:25:13.311405 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 7 weeks ago 329MB 2025-08-29 21:25:13.311417 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 7 weeks ago 326MB 2025-08-29 21:25:13.311427 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 7 weeks ago 1.59GB 2025-08-29 21:25:13.311438 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 7 weeks ago 1.55GB 2025-08-29 21:25:13.311448 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 7 weeks ago 417MB 2025-08-29 21:25:13.311459 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 7 weeks ago 318MB 2025-08-29 21:25:13.311470 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 7 weeks ago 746MB 2025-08-29 21:25:13.311480 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 7 weeks ago 375MB 2025-08-29 21:25:13.311491 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 7 weeks ago 1.01GB 2025-08-29 21:25:13.311501 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 7 weeks ago 318MB 2025-08-29 21:25:13.311547 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 7 weeks ago 361MB 2025-08-29 21:25:13.311588 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 7 weeks ago 361MB 2025-08-29 21:25:13.311599 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 7 weeks ago 1.21GB 2025-08-29 21:25:13.311610 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 7 weeks ago 353MB 2025-08-29 21:25:13.311634 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 7 weeks ago 410MB 2025-08-29 21:25:13.311646 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 7 weeks ago 344MB 2025-08-29 21:25:13.311658 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 7 weeks ago 358MB 2025-08-29 21:25:13.311668 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 7 weeks ago 351MB 2025-08-29 21:25:13.311679 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 7 weeks ago 324MB 2025-08-29 21:25:13.311689 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 7 weeks ago 324MB 2025-08-29 21:25:13.311700 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 7 weeks ago 590MB 2025-08-29 21:25:13.311711 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 7 weeks ago 946MB 2025-08-29 21:25:13.311722 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 7 weeks ago 947MB 2025-08-29 21:25:13.311732 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 7 weeks ago 947MB 2025-08-29 21:25:13.311743 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 7 weeks ago 946MB 2025-08-29 21:25:13.311753 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 7 weeks ago 1.04GB 2025-08-29 21:25:13.311764 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 7 weeks ago 1.04GB 2025-08-29 21:25:13.311774 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 7 weeks ago 1.1GB 2025-08-29 21:25:13.311785 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 7 weeks ago 1.1GB 2025-08-29 21:25:13.311795 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 7 weeks ago 1.12GB 2025-08-29 21:25:13.311825 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 7 weeks ago 1.1GB 2025-08-29 21:25:13.311836 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 7 weeks ago 1.12GB 2025-08-29 21:25:13.311847 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 7 weeks ago 1.15GB 2025-08-29 21:25:13.311857 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 7 weeks ago 1.04GB 2025-08-29 21:25:13.311868 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 7 weeks ago 1.06GB 2025-08-29 21:25:13.311878 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 7 weeks ago 1.06GB 2025-08-29 21:25:13.311896 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 7 weeks ago 1.06GB 2025-08-29 21:25:13.311907 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 7 weeks ago 
1.41GB 2025-08-29 21:25:13.311917 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 7 weeks ago 1.41GB 2025-08-29 21:25:13.311933 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 7 weeks ago 1.29GB 2025-08-29 21:25:13.311944 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 7 weeks ago 1.42GB 2025-08-29 21:25:13.311955 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 7 weeks ago 1.29GB 2025-08-29 21:25:13.311965 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 7 weeks ago 1.29GB 2025-08-29 21:25:13.311976 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 7 weeks ago 1.2GB 2025-08-29 21:25:13.311987 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 7 weeks ago 1.31GB 2025-08-29 21:25:13.311997 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 7 weeks ago 1.05GB 2025-08-29 21:25:13.312008 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 7 weeks ago 1.05GB 2025-08-29 21:25:13.312018 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 7 weeks ago 1.05GB 2025-08-29 21:25:13.312029 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 7 weeks ago 1.06GB 2025-08-29 21:25:13.312039 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 7 weeks ago 1.06GB 2025-08-29 21:25:13.312050 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 7 weeks ago 1.05GB 2025-08-29 21:25:13.312060 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 7 weeks ago 1.11GB 2025-08-29 21:25:13.312071 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 7 weeks ago 1.11GB 2025-08-29 21:25:13.312082 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 7 weeks ago 1.11GB 2025-08-29 21:25:13.312092 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 7 weeks ago 1.13GB 2025-08-29 21:25:13.312103 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 7 weeks ago 1.11GB 2025-08-29 21:25:13.312113 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 7 weeks ago 1.24GB 2025-08-29 21:25:13.312124 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 7 weeks ago 1.04GB 2025-08-29 21:25:13.312135 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 7 weeks ago 1.04GB 2025-08-29 21:25:13.312145 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 7 weeks ago 1.04GB 2025-08-29 21:25:13.312156 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 7 weeks ago 1.04GB 2025-08-29 21:25:13.312167 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 21:25:13.508412 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 21:25:13.508492 | orchestrator | ++ semver 9.2.0 5.0.0 
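The check phase reads manager_version from the configuration, compares it against 5.0.0 with the semver helper sourced from the include scripts (the pre-5.0.0 branch is skipped here, as the [[ 1 -eq -1 ]] test shows), and then walks all four nodes, listing running containers and pulled images through the manager. Stripped of the version gate, the loop amounts to the following sketch:

# Manager version, extracted as in the trace above (only used by the omitted version gate).
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' \
  /opt/configuration/environments/manager/configuration.yml)

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
  echo; echo "## Containers @ ${node}"; echo
  osism container "${node}" ps        # docker ps on the node, executed via the manager
  echo; echo "## Images @ ${node}"; echo
  osism container "${node}" images    # docker images on the node, executed via the manager
done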
2025-08-29 21:25:13.554609 | orchestrator | 2025-08-29 21:25:13.554637 | orchestrator | ## Containers @ testbed-node-1 2025-08-29 21:25:13.554648 | orchestrator | 2025-08-29 21:25:13.554660 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 21:25:13.554671 | orchestrator | + echo 2025-08-29 21:25:13.554683 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-08-29 21:25:13.554694 | orchestrator | + echo 2025-08-29 21:25:13.554705 | orchestrator | + osism container testbed-node-1 ps 2025-08-29 21:25:15.631957 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 21:25:15.632044 | orchestrator | 6b4e36fe5d12 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-08-29 21:25:15.632078 | orchestrator | f30235c4ee1c registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-08-29 21:25:15.632091 | orchestrator | 54d2e84d5811 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-08-29 21:25:15.632103 | orchestrator | 9f1ebd04a935 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-08-29 21:25:15.632114 | orchestrator | 9386c7a139ed registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-08-29 21:25:15.632124 | orchestrator | 02b365c09d1e registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-08-29 21:25:15.632135 | orchestrator | e5736b407a88 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-08-29 21:25:15.632146 | orchestrator | 2597a5a66cdd registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-08-29 21:25:15.632157 | orchestrator | 2eb66a5915ab registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-08-29 21:25:15.632170 | orchestrator | d7a8454c1210 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-08-29 21:25:15.632181 | orchestrator | 17fd138fc594 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-08-29 21:25:15.632191 | orchestrator | e380f5412203 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-08-29 21:25:15.632202 | orchestrator | 76550c0f993a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-08-29 21:25:15.632213 | orchestrator | 75255c3a867d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-08-29 21:25:15.632224 | orchestrator | ee4d494ad539 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 
minutes (healthy) magnum_api 2025-08-29 21:25:15.632255 | orchestrator | 28504c85a420 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-08-29 21:25:15.632266 | orchestrator | 698148cc4950 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-08-29 21:25:15.632276 | orchestrator | e6a91af9a08f registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-08-29 21:25:15.632287 | orchestrator | 3903502fb521 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-08-29 21:25:15.632313 | orchestrator | faa4406fac31 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-08-29 21:25:15.632324 | orchestrator | 108e69399781 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-08-29 21:25:15.632346 | orchestrator | 061d3366ccc5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-08-29 21:25:15.632357 | orchestrator | 371e2da4ad37 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-08-29 21:25:15.632368 | orchestrator | 492d3fca6441 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-08-29 21:25:15.632379 | orchestrator | 0979cded4331 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-08-29 21:25:15.632390 | orchestrator | 010db559135c registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-08-29 21:25:15.632402 | orchestrator | 71865f125724 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-08-29 21:25:15.632413 | orchestrator | 95f4613fb47f registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-08-29 21:25:15.632424 | orchestrator | 2c241e5cf24f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-08-29 21:25:15.632435 | orchestrator | 1c026824ece1 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-08-29 21:25:15.632446 | orchestrator | 490dba203d20 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-08-29 21:25:15.632457 | orchestrator | 8161097134a2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-08-29 21:25:15.632468 | orchestrator | 820314de5519 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 
"dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-08-29 21:25:15.632541 | orchestrator | 6411493994ac registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-08-29 21:25:15.632580 | orchestrator | 501e527b687e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-08-29 21:25:15.632593 | orchestrator | 8280060967ee registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 21:25:15.632606 | orchestrator | 500f155037b4 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 23 minutes (healthy) proxysql 2025-08-29 21:25:15.632618 | orchestrator | 8afb0c72f78a registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-08-29 21:25:15.632630 | orchestrator | d511f92b667e registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 21:25:15.632642 | orchestrator | 3c6897420184 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 21:25:15.632661 | orchestrator | 8e33e5aa1d6f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-08-29 21:25:15.632674 | orchestrator | dc60ce39e44b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-08-29 21:25:15.632686 | orchestrator | cfd7bc6dbbc0 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-08-29 21:25:15.632698 | orchestrator | 9852d91387b6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2025-08-29 21:25:15.632717 | orchestrator | 99073085fe9a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-08-29 21:25:15.632729 | orchestrator | d553a0221fb1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-08-29 21:25:15.632742 | orchestrator | 5d82983512cd registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-08-29 21:25:15.632754 | orchestrator | 82c06ec56c86 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-08-29 21:25:15.632766 | orchestrator | 2fa29a158ad7 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-08-29 21:25:15.632778 | orchestrator | c20df200804d registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-08-29 21:25:15.632790 | orchestrator | 800e140cc381 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-08-29 21:25:15.632802 | orchestrator | 675518c9bf96 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 
"dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-08-29 21:25:15.885418 | orchestrator | 2025-08-29 21:25:15.885490 | orchestrator | ## Images @ testbed-node-1 2025-08-29 21:25:15.885499 | orchestrator | 2025-08-29 21:25:15.885506 | orchestrator | + echo 2025-08-29 21:25:15.885511 | orchestrator | + echo '## Images @ testbed-node-1' 2025-08-29 21:25:15.885518 | orchestrator | + echo 2025-08-29 21:25:15.885524 | orchestrator | + osism container testbed-node-1 images 2025-08-29 21:25:18.096771 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 21:25:18.096875 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 7 weeks ago 628MB 2025-08-29 21:25:18.096890 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 7 weeks ago 329MB 2025-08-29 21:25:18.096902 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 7 weeks ago 326MB 2025-08-29 21:25:18.096913 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 7 weeks ago 1.59GB 2025-08-29 21:25:18.096924 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 7 weeks ago 1.55GB 2025-08-29 21:25:18.096935 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 7 weeks ago 417MB 2025-08-29 21:25:18.096946 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 7 weeks ago 318MB 2025-08-29 21:25:18.096957 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 7 weeks ago 746MB 2025-08-29 21:25:18.096968 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 7 weeks ago 375MB 2025-08-29 21:25:18.096979 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 7 weeks ago 1.01GB 2025-08-29 21:25:18.096990 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 7 weeks ago 318MB 2025-08-29 21:25:18.097000 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 7 weeks ago 361MB 2025-08-29 21:25:18.097011 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 7 weeks ago 361MB 2025-08-29 21:25:18.097022 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 7 weeks ago 1.21GB 2025-08-29 21:25:18.097032 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 7 weeks ago 353MB 2025-08-29 21:25:18.097043 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 7 weeks ago 410MB 2025-08-29 21:25:18.097054 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 7 weeks ago 344MB 2025-08-29 21:25:18.097064 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 7 weeks ago 358MB 2025-08-29 21:25:18.097076 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 7 weeks ago 351MB 2025-08-29 21:25:18.097087 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 7 weeks ago 324MB 2025-08-29 21:25:18.097098 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 7 weeks ago 324MB 2025-08-29 21:25:18.097109 | orchestrator | 
registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 7 weeks ago 590MB 2025-08-29 21:25:18.097120 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 7 weeks ago 946MB 2025-08-29 21:25:18.097152 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 7 weeks ago 947MB 2025-08-29 21:25:18.097163 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 7 weeks ago 947MB 2025-08-29 21:25:18.097174 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 7 weeks ago 946MB 2025-08-29 21:25:18.097201 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 7 weeks ago 1.15GB 2025-08-29 21:25:18.097212 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 7 weeks ago 1.04GB 2025-08-29 21:25:18.097223 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 7 weeks ago 1.06GB 2025-08-29 21:25:18.097233 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 7 weeks ago 1.06GB 2025-08-29 21:25:18.097244 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 7 weeks ago 1.06GB 2025-08-29 21:25:18.097272 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 7 weeks ago 1.41GB 2025-08-29 21:25:18.097284 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 7 weeks ago 1.41GB 2025-08-29 21:25:18.097295 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 7 weeks ago 1.29GB 2025-08-29 21:25:18.097306 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 7 weeks ago 1.42GB 2025-08-29 21:25:18.097317 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 7 weeks ago 1.29GB 2025-08-29 21:25:18.097329 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 7 weeks ago 1.29GB 2025-08-29 21:25:18.097341 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 7 weeks ago 1.2GB 2025-08-29 21:25:18.097371 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 7 weeks ago 1.31GB 2025-08-29 21:25:18.097383 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 7 weeks ago 1.05GB 2025-08-29 21:25:18.097396 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 7 weeks ago 1.05GB 2025-08-29 21:25:18.097408 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 7 weeks ago 1.05GB 2025-08-29 21:25:18.097421 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 7 weeks ago 1.06GB 2025-08-29 21:25:18.097433 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 7 weeks ago 1.06GB 2025-08-29 21:25:18.097446 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 7 weeks ago 1.05GB 2025-08-29 21:25:18.097458 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 7 weeks ago 1.11GB 2025-08-29 21:25:18.097472 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 7 weeks ago 1.13GB 2025-08-29 21:25:18.097484 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 7 weeks ago 1.11GB 2025-08-29 21:25:18.097497 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 7 weeks ago 1.24GB 2025-08-29 21:25:18.097517 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 21:25:18.353089 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 21:25:18.353385 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 21:25:18.405363 | orchestrator | 2025-08-29 21:25:18.405421 | orchestrator | ## Containers @ testbed-node-2 2025-08-29 21:25:18.405433 | orchestrator | 2025-08-29 21:25:18.405445 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 21:25:18.405457 | orchestrator | + echo 2025-08-29 21:25:18.405468 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-08-29 21:25:18.405480 | orchestrator | + echo 2025-08-29 21:25:18.405491 | orchestrator | + osism container testbed-node-2 ps 2025-08-29 21:25:20.616534 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 21:25:20.616702 | orchestrator | 43e2f19376e7 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-08-29 21:25:20.616719 | orchestrator | e62102c0e428 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-08-29 21:25:20.616776 | orchestrator | d7baca73f5ea registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-08-29 21:25:20.616789 | orchestrator | ad1db5da212d registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-08-29 21:25:20.616800 | orchestrator | 0c54319e0373 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-08-29 21:25:20.616812 | orchestrator | 8340e8102e78 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-08-29 21:25:20.616823 | orchestrator | a5219981320b registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-08-29 21:25:20.616838 | orchestrator | cd337f17af8b registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-08-29 21:25:20.616858 | orchestrator | 12198fe1f96f registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-08-29 21:25:20.616880 | orchestrator | 2935eed178e1 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-08-29 21:25:20.616898 | orchestrator | c6fee5805011 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-08-29 21:25:20.616939 | orchestrator | db8e6246c717 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-08-29 21:25:20.616959 | orchestrator | b90d9e3d2a55 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-08-29 21:25:20.616976 | orchestrator | 1c131a8f2cdc registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-08-29 21:25:20.617016 | orchestrator | 4f7e2ff4e365 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-08-29 21:25:20.617034 | orchestrator | f4394008e64b registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-08-29 21:25:20.617055 | orchestrator | a041303c6788 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-08-29 21:25:20.617072 | orchestrator | 7b13d4ac1461 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-08-29 21:25:20.617091 | orchestrator | 047856c6c9dc registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-08-29 21:25:20.617135 | orchestrator | f26ab7a13001 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-08-29 21:25:20.617157 | orchestrator | 51f6ded7b1a4 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-08-29 21:25:20.617179 | orchestrator | 5cbb29503107 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-08-29 21:25:20.617198 | orchestrator | 749115c0b263 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-08-29 21:25:20.617211 | orchestrator | ed8b6ec51292 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-08-29 21:25:20.617223 | orchestrator | ec9c4a0e2a3f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-08-29 21:25:20.617235 | orchestrator | 9d2758ea8864 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-08-29 21:25:20.617247 | orchestrator | e3c60444829d registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-08-29 21:25:20.617260 | orchestrator | 881baed2755e registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-08-29 21:25:20.617271 | orchestrator | b29899eeaae1 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 
2025-08-29 21:25:20.617283 | orchestrator | 09fcf9031027 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-08-29 21:25:20.617295 | orchestrator | b8623d97832a registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-08-29 21:25:20.617307 | orchestrator | 0bcb3617d164 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-08-29 21:25:20.617332 | orchestrator | 8713ac842743 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-08-29 21:25:20.617352 | orchestrator | 5f3f8ecd47a9 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-08-29 21:25:20.617370 | orchestrator | 606d08de44f2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-08-29 21:25:20.617389 | orchestrator | a8202fd0a165 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 21:25:20.617409 | orchestrator | 3387a44abdb7 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-08-29 21:25:20.617429 | orchestrator | 9c75f985419d registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-08-29 21:25:20.617448 | orchestrator | 921f923f603d registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 21:25:20.617467 | orchestrator | 49b06da5d3a0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 21:25:20.617493 | orchestrator | 4727e56ae020 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-08-29 21:25:20.617513 | orchestrator | 40bce8fc97a4 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-08-29 21:25:20.617531 | orchestrator | 32ec19564515 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-08-29 21:25:20.617548 | orchestrator | f96ce5261eb1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-08-29 21:25:20.617624 | orchestrator | 00d0eb06504a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-08-29 21:25:20.617644 | orchestrator | 61f25e305d34 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-08-29 21:25:20.617656 | orchestrator | eb2f082ed6e1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-08-29 21:25:20.617667 | orchestrator | 2b5db318099d 
registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-08-29 21:25:20.617678 | orchestrator | faa1b926afec registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-08-29 21:25:20.617688 | orchestrator | e8473ab1eec3 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-08-29 21:25:20.617699 | orchestrator | 19baa83852b9 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-08-29 21:25:20.617719 | orchestrator | 761eb0eb84e1 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-08-29 21:25:20.878454 | orchestrator | 2025-08-29 21:25:20.878557 | orchestrator | ## Images @ testbed-node-2 2025-08-29 21:25:20.878630 | orchestrator | 2025-08-29 21:25:20.878643 | orchestrator | + echo 2025-08-29 21:25:20.878655 | orchestrator | + echo '## Images @ testbed-node-2' 2025-08-29 21:25:20.878667 | orchestrator | + echo 2025-08-29 21:25:20.878678 | orchestrator | + osism container testbed-node-2 images 2025-08-29 21:25:23.071921 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 21:25:23.072019 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 7 weeks ago 628MB 2025-08-29 21:25:23.072033 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 7 weeks ago 329MB 2025-08-29 21:25:23.072042 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 7 weeks ago 326MB 2025-08-29 21:25:23.072052 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 7 weeks ago 1.59GB 2025-08-29 21:25:23.072062 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 7 weeks ago 1.55GB 2025-08-29 21:25:23.072071 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 7 weeks ago 417MB 2025-08-29 21:25:23.072080 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 7 weeks ago 318MB 2025-08-29 21:25:23.072090 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 7 weeks ago 375MB 2025-08-29 21:25:23.072099 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 7 weeks ago 746MB 2025-08-29 21:25:23.072109 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 7 weeks ago 1.01GB 2025-08-29 21:25:23.072118 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 7 weeks ago 318MB 2025-08-29 21:25:23.072127 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 7 weeks ago 361MB 2025-08-29 21:25:23.072137 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 7 weeks ago 361MB 2025-08-29 21:25:23.072147 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 7 weeks ago 1.21GB 2025-08-29 21:25:23.072156 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 7 weeks ago 353MB 2025-08-29 21:25:23.072166 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 7 weeks ago 410MB 
2025-08-29 21:25:23.072176 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 7 weeks ago 344MB 2025-08-29 21:25:23.072185 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 7 weeks ago 358MB 2025-08-29 21:25:23.072195 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 7 weeks ago 351MB 2025-08-29 21:25:23.072204 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 7 weeks ago 324MB 2025-08-29 21:25:23.072213 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 7 weeks ago 324MB 2025-08-29 21:25:23.072223 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 7 weeks ago 590MB 2025-08-29 21:25:23.072249 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 7 weeks ago 946MB 2025-08-29 21:25:23.072259 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 7 weeks ago 947MB 2025-08-29 21:25:23.072269 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 7 weeks ago 947MB 2025-08-29 21:25:23.072278 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 7 weeks ago 946MB 2025-08-29 21:25:23.072287 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 7 weeks ago 1.15GB 2025-08-29 21:25:23.072296 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 7 weeks ago 1.04GB 2025-08-29 21:25:23.072306 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 7 weeks ago 1.06GB 2025-08-29 21:25:23.072315 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 7 weeks ago 1.06GB 2025-08-29 21:25:23.072324 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 7 weeks ago 1.06GB 2025-08-29 21:25:23.072350 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 7 weeks ago 1.41GB 2025-08-29 21:25:23.072360 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 7 weeks ago 1.41GB 2025-08-29 21:25:23.072370 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 7 weeks ago 1.29GB 2025-08-29 21:25:23.072379 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 7 weeks ago 1.42GB 2025-08-29 21:25:23.072388 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 7 weeks ago 1.29GB 2025-08-29 21:25:23.072398 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 7 weeks ago 1.29GB 2025-08-29 21:25:23.072407 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 7 weeks ago 1.2GB 2025-08-29 21:25:23.072416 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 7 weeks ago 1.31GB 2025-08-29 21:25:23.072426 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 7 weeks ago 1.05GB 2025-08-29 21:25:23.072435 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 7 weeks ago 1.05GB 
2025-08-29 21:25:23.072446 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 7 weeks ago 1.05GB 2025-08-29 21:25:23.072456 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 7 weeks ago 1.06GB 2025-08-29 21:25:23.072475 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 7 weeks ago 1.06GB 2025-08-29 21:25:23.072486 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 7 weeks ago 1.05GB 2025-08-29 21:25:23.072497 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 7 weeks ago 1.11GB 2025-08-29 21:25:23.072509 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 7 weeks ago 1.13GB 2025-08-29 21:25:23.072520 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 7 weeks ago 1.11GB 2025-08-29 21:25:23.072537 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 7 weeks ago 1.24GB 2025-08-29 21:25:23.072548 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 21:25:23.330271 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-08-29 21:25:23.338610 | orchestrator | + set -e 2025-08-29 21:25:23.338653 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 21:25:23.339536 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 21:25:23.339588 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 21:25:23.339599 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 21:25:23.339609 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 21:25:23.339620 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 21:25:23.339631 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 21:25:23.339641 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 21:25:23.339651 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 21:25:23.339660 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 21:25:23.339670 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 21:25:23.339680 | orchestrator | ++ export ARA=false 2025-08-29 21:25:23.339689 | orchestrator | ++ ARA=false 2025-08-29 21:25:23.339699 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 21:25:23.339709 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 21:25:23.339718 | orchestrator | ++ export TEMPEST=false 2025-08-29 21:25:23.339728 | orchestrator | ++ TEMPEST=false 2025-08-29 21:25:23.339737 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 21:25:23.339747 | orchestrator | ++ IS_ZUUL=true 2025-08-29 21:25:23.339756 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 21:25:23.339766 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 21:25:23.339775 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 21:25:23.339785 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 21:25:23.339794 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 21:25:23.339804 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 21:25:23.339813 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 21:25:23.339823 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 21:25:23.339832 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 21:25:23.339842 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 21:25:23.339851 | orchestrator | + [[ ceph-ansible == 
\c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 21:25:23.339861 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-08-29 21:25:23.348296 | orchestrator | + set -e 2025-08-29 21:25:23.349311 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 21:25:23.349336 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 21:25:23.349347 | orchestrator | ++ INTERACTIVE=false 2025-08-29 21:25:23.349356 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 21:25:23.349366 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 21:25:23.349375 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 21:25:23.350142 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 21:25:23.356847 | orchestrator | 2025-08-29 21:25:23.356870 | orchestrator | # Ceph status 2025-08-29 21:25:23.356880 | orchestrator | 2025-08-29 21:25:23.356891 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 21:25:23.356905 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 21:25:23.356915 | orchestrator | + echo 2025-08-29 21:25:23.356926 | orchestrator | + echo '# Ceph status' 2025-08-29 21:25:23.356935 | orchestrator | + echo 2025-08-29 21:25:23.356945 | orchestrator | + ceph -s 2025-08-29 21:25:23.936128 | orchestrator | cluster: 2025-08-29 21:25:23.936228 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-08-29 21:25:23.936244 | orchestrator | health: HEALTH_OK 2025-08-29 21:25:23.936258 | orchestrator | 2025-08-29 21:25:23.936270 | orchestrator | services: 2025-08-29 21:25:23.936282 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-08-29 21:25:23.936306 | orchestrator | mgr: testbed-node-1(active, since 15m), standbys: testbed-node-0, testbed-node-2 2025-08-29 21:25:23.936319 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-08-29 21:25:23.936330 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 24m) 2025-08-29 21:25:23.936341 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-08-29 21:25:23.936352 | orchestrator | 2025-08-29 21:25:23.936364 | orchestrator | data: 2025-08-29 21:25:23.936375 | orchestrator | volumes: 1/1 healthy 2025-08-29 21:25:23.936398 | orchestrator | pools: 14 pools, 401 pgs 2025-08-29 21:25:23.936431 | orchestrator | objects: 524 objects, 2.2 GiB 2025-08-29 21:25:23.936443 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-08-29 21:25:23.936454 | orchestrator | pgs: 401 active+clean 2025-08-29 21:25:23.936465 | orchestrator | 2025-08-29 21:25:23.982321 | orchestrator | 2025-08-29 21:25:23.982352 | orchestrator | # Ceph versions 2025-08-29 21:25:23.982364 | orchestrator | + echo 2025-08-29 21:25:23.982375 | orchestrator | + echo '# Ceph versions' 2025-08-29 21:25:23.982386 | orchestrator | + echo 2025-08-29 21:25:23.982873 | orchestrator | 2025-08-29 21:25:23.982895 | orchestrator | + ceph versions 2025-08-29 21:25:24.531470 | orchestrator | { 2025-08-29 21:25:24.531562 | orchestrator | "mon": { 2025-08-29 21:25:24.531633 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 21:25:24.531645 | orchestrator | }, 2025-08-29 21:25:24.531655 | orchestrator | "mgr": { 2025-08-29 21:25:24.531665 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 21:25:24.531675 | orchestrator | }, 2025-08-29 21:25:24.531685 | orchestrator | "osd": { 
2025-08-29 21:25:24.531694 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-08-29 21:25:24.531704 | orchestrator | }, 2025-08-29 21:25:24.531714 | orchestrator | "mds": { 2025-08-29 21:25:24.531724 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 21:25:24.531733 | orchestrator | }, 2025-08-29 21:25:24.531743 | orchestrator | "rgw": { 2025-08-29 21:25:24.531752 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 21:25:24.531762 | orchestrator | }, 2025-08-29 21:25:24.531772 | orchestrator | "overall": { 2025-08-29 21:25:24.531782 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-08-29 21:25:24.531792 | orchestrator | } 2025-08-29 21:25:24.531802 | orchestrator | } 2025-08-29 21:25:24.574152 | orchestrator | 2025-08-29 21:25:24.574182 | orchestrator | + echo 2025-08-29 21:25:24.574192 | orchestrator | + echo '# Ceph OSD tree' 2025-08-29 21:25:24.574896 | orchestrator | # Ceph OSD tree 2025-08-29 21:25:24.574917 | orchestrator | 2025-08-29 21:25:24.574928 | orchestrator | + echo 2025-08-29 21:25:24.574939 | orchestrator | + ceph osd df tree 2025-08-29 21:25:25.090852 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-08-29 21:25:25.090940 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-08-29 21:25:25.090952 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-08-29 21:25:25.090961 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.28 1.06 175 up osd.0 2025-08-29 21:25:25.090971 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.55 0.94 213 up osd.3 2025-08-29 21:25:25.090981 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-08-29 21:25:25.090991 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.81 1.15 209 up osd.1 2025-08-29 21:25:25.091000 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 74 MiB 19 GiB 5.03 0.85 181 up osd.5 2025-08-29 21:25:25.091010 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-08-29 21:25:25.091019 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.18 1.21 203 up osd.2 2025-08-29 21:25:25.091029 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 952 MiB 883 MiB 1 KiB 70 MiB 19 GiB 4.65 0.79 189 up osd.4 2025-08-29 21:25:25.091038 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-08-29 21:25:25.091048 | orchestrator | MIN/MAX VAR: 0.79/1.21 STDDEV: 0.92 2025-08-29 21:25:25.134329 | orchestrator | 2025-08-29 21:25:25.134445 | orchestrator | # Ceph monitor status 2025-08-29 21:25:25.134461 | orchestrator | 2025-08-29 21:25:25.134473 | orchestrator | + echo 2025-08-29 21:25:25.134485 | orchestrator | + echo '# Ceph monitor status' 2025-08-29 21:25:25.134497 | orchestrator | + echo 2025-08-29 21:25:25.134508 | orchestrator | + ceph mon stat 2025-08-29 21:25:25.681444 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-08-29 21:25:25.729280 | orchestrator | 2025-08-29 21:25:25.729361 | orchestrator | # Ceph quorum status 2025-08-29 21:25:25.729376 | orchestrator | 2025-08-29 21:25:25.729388 | orchestrator | + echo 2025-08-29 21:25:25.729400 | orchestrator | + echo '# Ceph quorum status' 2025-08-29 21:25:25.729411 | orchestrator | + echo 2025-08-29 21:25:25.730185 | orchestrator | + ceph quorum_status 2025-08-29 21:25:25.730211 | orchestrator | + jq 2025-08-29 21:25:26.368420 | orchestrator | { 2025-08-29 21:25:26.368636 | orchestrator | "election_epoch": 6, 2025-08-29 21:25:26.368657 | orchestrator | "quorum": [ 2025-08-29 21:25:26.368670 | orchestrator | 0, 2025-08-29 21:25:26.368681 | orchestrator | 1, 2025-08-29 21:25:26.368691 | orchestrator | 2 2025-08-29 21:25:26.368702 | orchestrator | ], 2025-08-29 21:25:26.368713 | orchestrator | "quorum_names": [ 2025-08-29 21:25:26.368724 | orchestrator | "testbed-node-0", 2025-08-29 21:25:26.368734 | orchestrator | "testbed-node-1", 2025-08-29 21:25:26.368745 | orchestrator | "testbed-node-2" 2025-08-29 21:25:26.368756 | orchestrator | ], 2025-08-29 21:25:26.368767 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-08-29 21:25:26.368779 | orchestrator | "quorum_age": 1670, 2025-08-29 21:25:26.368790 | orchestrator | "features": { 2025-08-29 21:25:26.368800 | orchestrator | "quorum_con": "4540138322906710015", 2025-08-29 21:25:26.368811 | orchestrator | "quorum_mon": [ 2025-08-29 21:25:26.368822 | orchestrator | "kraken", 2025-08-29 21:25:26.368832 | orchestrator | "luminous", 2025-08-29 21:25:26.368843 | orchestrator | "mimic", 2025-08-29 21:25:26.368853 | orchestrator | "osdmap-prune", 2025-08-29 21:25:26.368864 | orchestrator | "nautilus", 2025-08-29 21:25:26.368875 | orchestrator | "octopus", 2025-08-29 21:25:26.368885 | orchestrator | "pacific", 2025-08-29 21:25:26.368895 | orchestrator | "elector-pinging", 2025-08-29 21:25:26.368906 | orchestrator | "quincy", 2025-08-29 21:25:26.368917 | orchestrator | "reef" 2025-08-29 21:25:26.368928 | orchestrator | ] 2025-08-29 21:25:26.368938 | orchestrator | }, 2025-08-29 21:25:26.368949 | orchestrator | "monmap": { 2025-08-29 21:25:26.368959 | orchestrator | "epoch": 1, 2025-08-29 21:25:26.368970 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-08-29 21:25:26.368982 | orchestrator | "modified": "2025-08-29T20:57:12.689210Z", 2025-08-29 21:25:26.368992 | orchestrator | "created": "2025-08-29T20:57:12.689210Z", 2025-08-29 21:25:26.369003 | orchestrator | "min_mon_release": 18, 2025-08-29 21:25:26.369013 | orchestrator | "min_mon_release_name": "reef", 2025-08-29 21:25:26.369024 | orchestrator | "election_strategy": 1, 2025-08-29 21:25:26.369035 | orchestrator | "disallowed_leaders: ": "", 2025-08-29 21:25:26.369045 | orchestrator | "stretch_mode": false, 2025-08-29 21:25:26.369056 | orchestrator | "tiebreaker_mon": "", 2025-08-29 21:25:26.369066 | orchestrator | "removed_ranks: ": "", 2025-08-29 21:25:26.369077 | orchestrator | "features": { 2025-08-29 21:25:26.369087 | orchestrator | "persistent": [ 2025-08-29 21:25:26.369098 | orchestrator | "kraken", 2025-08-29 21:25:26.369108 | orchestrator | "luminous", 2025-08-29 21:25:26.369119 | 
orchestrator | "mimic", 2025-08-29 21:25:26.369130 | orchestrator | "osdmap-prune", 2025-08-29 21:25:26.369140 | orchestrator | "nautilus", 2025-08-29 21:25:26.369150 | orchestrator | "octopus", 2025-08-29 21:25:26.369161 | orchestrator | "pacific", 2025-08-29 21:25:26.369171 | orchestrator | "elector-pinging", 2025-08-29 21:25:26.369182 | orchestrator | "quincy", 2025-08-29 21:25:26.369193 | orchestrator | "reef" 2025-08-29 21:25:26.369203 | orchestrator | ], 2025-08-29 21:25:26.369214 | orchestrator | "optional": [] 2025-08-29 21:25:26.369225 | orchestrator | }, 2025-08-29 21:25:26.369235 | orchestrator | "mons": [ 2025-08-29 21:25:26.369246 | orchestrator | { 2025-08-29 21:25:26.369256 | orchestrator | "rank": 0, 2025-08-29 21:25:26.369267 | orchestrator | "name": "testbed-node-0", 2025-08-29 21:25:26.369277 | orchestrator | "public_addrs": { 2025-08-29 21:25:26.369288 | orchestrator | "addrvec": [ 2025-08-29 21:25:26.369299 | orchestrator | { 2025-08-29 21:25:26.369309 | orchestrator | "type": "v2", 2025-08-29 21:25:26.369343 | orchestrator | "addr": "192.168.16.10:3300", 2025-08-29 21:25:26.369354 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369365 | orchestrator | }, 2025-08-29 21:25:26.369376 | orchestrator | { 2025-08-29 21:25:26.369386 | orchestrator | "type": "v1", 2025-08-29 21:25:26.369397 | orchestrator | "addr": "192.168.16.10:6789", 2025-08-29 21:25:26.369408 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369418 | orchestrator | } 2025-08-29 21:25:26.369429 | orchestrator | ] 2025-08-29 21:25:26.369439 | orchestrator | }, 2025-08-29 21:25:26.369450 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-08-29 21:25:26.369461 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-08-29 21:25:26.369471 | orchestrator | "priority": 0, 2025-08-29 21:25:26.369482 | orchestrator | "weight": 0, 2025-08-29 21:25:26.369492 | orchestrator | "crush_location": "{}" 2025-08-29 21:25:26.369503 | orchestrator | }, 2025-08-29 21:25:26.369513 | orchestrator | { 2025-08-29 21:25:26.369524 | orchestrator | "rank": 1, 2025-08-29 21:25:26.369534 | orchestrator | "name": "testbed-node-1", 2025-08-29 21:25:26.369545 | orchestrator | "public_addrs": { 2025-08-29 21:25:26.369555 | orchestrator | "addrvec": [ 2025-08-29 21:25:26.369585 | orchestrator | { 2025-08-29 21:25:26.369597 | orchestrator | "type": "v2", 2025-08-29 21:25:26.369607 | orchestrator | "addr": "192.168.16.11:3300", 2025-08-29 21:25:26.369618 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369629 | orchestrator | }, 2025-08-29 21:25:26.369640 | orchestrator | { 2025-08-29 21:25:26.369651 | orchestrator | "type": "v1", 2025-08-29 21:25:26.369662 | orchestrator | "addr": "192.168.16.11:6789", 2025-08-29 21:25:26.369672 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369683 | orchestrator | } 2025-08-29 21:25:26.369694 | orchestrator | ] 2025-08-29 21:25:26.369705 | orchestrator | }, 2025-08-29 21:25:26.369716 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-08-29 21:25:26.369726 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-08-29 21:25:26.369737 | orchestrator | "priority": 0, 2025-08-29 21:25:26.369748 | orchestrator | "weight": 0, 2025-08-29 21:25:26.369758 | orchestrator | "crush_location": "{}" 2025-08-29 21:25:26.369769 | orchestrator | }, 2025-08-29 21:25:26.369780 | orchestrator | { 2025-08-29 21:25:26.369807 | orchestrator | "rank": 2, 2025-08-29 21:25:26.369818 | orchestrator | "name": "testbed-node-2", 2025-08-29 21:25:26.369829 | orchestrator | "public_addrs": { 2025-08-29 21:25:26.369840 | 
orchestrator | "addrvec": [ 2025-08-29 21:25:26.369851 | orchestrator | { 2025-08-29 21:25:26.369861 | orchestrator | "type": "v2", 2025-08-29 21:25:26.369872 | orchestrator | "addr": "192.168.16.12:3300", 2025-08-29 21:25:26.369883 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369894 | orchestrator | }, 2025-08-29 21:25:26.369904 | orchestrator | { 2025-08-29 21:25:26.369915 | orchestrator | "type": "v1", 2025-08-29 21:25:26.369926 | orchestrator | "addr": "192.168.16.12:6789", 2025-08-29 21:25:26.369936 | orchestrator | "nonce": 0 2025-08-29 21:25:26.369947 | orchestrator | } 2025-08-29 21:25:26.369957 | orchestrator | ] 2025-08-29 21:25:26.369968 | orchestrator | }, 2025-08-29 21:25:26.369979 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-08-29 21:25:26.369990 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-08-29 21:25:26.370000 | orchestrator | "priority": 0, 2025-08-29 21:25:26.370011 | orchestrator | "weight": 0, 2025-08-29 21:25:26.370074 | orchestrator | "crush_location": "{}" 2025-08-29 21:25:26.370085 | orchestrator | } 2025-08-29 21:25:26.370097 | orchestrator | ] 2025-08-29 21:25:26.370107 | orchestrator | } 2025-08-29 21:25:26.370118 | orchestrator | } 2025-08-29 21:25:26.370477 | orchestrator | 2025-08-29 21:25:26.370609 | orchestrator | # Ceph free space status 2025-08-29 21:25:26.370628 | orchestrator | 2025-08-29 21:25:26.370641 | orchestrator | + echo 2025-08-29 21:25:26.370653 | orchestrator | + echo '# Ceph free space status' 2025-08-29 21:25:26.370664 | orchestrator | + echo 2025-08-29 21:25:26.370675 | orchestrator | + ceph df 2025-08-29 21:25:26.930532 | orchestrator | --- RAW STORAGE --- 2025-08-29 21:25:26.930682 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-08-29 21:25:26.930712 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 21:25:26.930724 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 21:25:26.930735 | orchestrator | 2025-08-29 21:25:26.930748 | orchestrator | --- POOLS --- 2025-08-29 21:25:26.930786 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-08-29 21:25:26.930798 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-08-29 21:25:26.930809 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-08-29 21:25:26.930820 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-08-29 21:25:26.930830 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-08-29 21:25:26.930841 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-08-29 21:25:26.930852 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-08-29 21:25:26.930862 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-08-29 21:25:26.930873 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-08-29 21:25:26.930883 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-08-29 21:25:26.930894 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 21:25:26.930904 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 21:25:26.930915 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2025-08-29 21:25:26.930925 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 21:25:26.930935 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 21:25:26.974728 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 21:25:27.029134 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 21:25:27.029194 | orchestrator | + [[ ! 
-e /etc/redhat-release ]] 2025-08-29 21:25:27.029208 | orchestrator | + osism apply facts 2025-08-29 21:25:39.019856 | orchestrator | 2025-08-29 21:25:39 | INFO  | Task da60e2e1-d97f-4565-ab60-4979314690ff (facts) was prepared for execution. 2025-08-29 21:25:39.019990 | orchestrator | 2025-08-29 21:25:39 | INFO  | It takes a moment until task da60e2e1-d97f-4565-ab60-4979314690ff (facts) has been started and output is visible here. 2025-08-29 21:25:51.973058 | orchestrator | 2025-08-29 21:25:51.973173 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 21:25:51.973189 | orchestrator | 2025-08-29 21:25:51.973201 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 21:25:51.973213 | orchestrator | Friday 29 August 2025 21:25:42 +0000 (0:00:00.198) 0:00:00.198 ********* 2025-08-29 21:25:51.973225 | orchestrator | ok: [testbed-manager] 2025-08-29 21:25:51.973237 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:25:51.973248 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:25:51.973277 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:25:51.973289 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:25:51.973299 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:25:51.973310 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:25:51.973321 | orchestrator | 2025-08-29 21:25:51.973332 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 21:25:51.973344 | orchestrator | Friday 29 August 2025 21:25:43 +0000 (0:00:01.297) 0:00:01.495 ********* 2025-08-29 21:25:51.973355 | orchestrator | skipping: [testbed-manager] 2025-08-29 21:25:51.973367 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:25:51.973378 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:25:51.973389 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:25:51.973399 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:25:51.973410 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:25:51.973421 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:25:51.973431 | orchestrator | 2025-08-29 21:25:51.973442 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 21:25:51.973483 | orchestrator | 2025-08-29 21:25:51.973494 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 21:25:51.973505 | orchestrator | Friday 29 August 2025 21:25:44 +0000 (0:00:01.116) 0:00:02.612 ********* 2025-08-29 21:25:51.973541 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:25:51.973553 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:25:51.973563 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:25:51.973574 | orchestrator | ok: [testbed-manager] 2025-08-29 21:25:51.973618 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:25:51.973631 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:25:51.973644 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:25:51.973655 | orchestrator | 2025-08-29 21:25:51.973667 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 21:25:51.973680 | orchestrator | 2025-08-29 21:25:51.973692 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 21:25:51.973704 | orchestrator | Friday 29 August 2025 21:25:51 +0000 (0:00:06.064) 0:00:08.676 ********* 2025-08-29 21:25:51.973716 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 21:25:51.973728 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:25:51.973740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:25:51.973752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:25:51.973764 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:25:51.973776 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:25:51.973788 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:25:51.973800 | orchestrator | 2025-08-29 21:25:51.973812 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:25:51.973825 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973837 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973850 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973862 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973874 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973886 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973899 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:25:51.973911 | orchestrator | 2025-08-29 21:25:51.973923 | orchestrator | 2025-08-29 21:25:51.973936 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:25:51.973949 | orchestrator | Friday 29 August 2025 21:25:51 +0000 (0:00:00.570) 0:00:09.247 ********* 2025-08-29 21:25:51.973960 | orchestrator | =============================================================================== 2025-08-29 21:25:51.973971 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.06s 2025-08-29 21:25:51.973981 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2025-08-29 21:25:51.973992 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s 2025-08-29 21:25:51.974003 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-08-29 21:25:52.246314 | orchestrator | + osism validate ceph-mons 2025-08-29 21:26:23.499943 | orchestrator | 2025-08-29 21:26:23.500057 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-08-29 21:26:23.500073 | orchestrator | 2025-08-29 21:26:23.500085 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 21:26:23.500097 | orchestrator | Friday 29 August 2025 21:26:08 +0000 (0:00:00.431) 0:00:00.431 ********* 2025-08-29 21:26:23.500109 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.500144 | orchestrator | 2025-08-29 21:26:23.500155 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 21:26:23.500166 | orchestrator | Friday 29 August 2025 21:26:09 +0000 (0:00:00.632) 0:00:01.064 ********* 2025-08-29 21:26:23.500178 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.500188 | orchestrator | 2025-08-29 
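`osism validate ceph-mons` hands a validator playbook to the OSISM manager in the same way `osism apply` does; each validator records per-test results and writes a timestamped JSON report on testbed-manager. A sketch of running the three Ceph validators used in this job back to back, assuming `osism validate` returns a non-zero exit code when a validation fails (not verified from this log):

# Run the Ceph validators in sequence and stop at the first failure.
set -e
for target in ceph-mons ceph-mgrs ceph-osds; do
    echo "validating ${target}"
    osism validate "${target}"
done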
21:26:23.500200 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 21:26:23.500210 | orchestrator | Friday 29 August 2025 21:26:09 +0000 (0:00:00.832) 0:00:01.896 ********* 2025-08-29 21:26:23.500221 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.500233 | orchestrator | 2025-08-29 21:26:23.500244 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-08-29 21:26:23.500255 | orchestrator | Friday 29 August 2025 21:26:10 +0000 (0:00:00.238) 0:00:02.134 ********* 2025-08-29 21:26:23.500266 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.500277 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:23.500288 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:23.500299 | orchestrator | 2025-08-29 21:26:23.500310 | orchestrator | TASK [Get container info] ****************************************************** 2025-08-29 21:26:23.500321 | orchestrator | Friday 29 August 2025 21:26:10 +0000 (0:00:00.278) 0:00:02.413 ********* 2025-08-29 21:26:23.500332 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:23.500344 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:23.500355 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.500365 | orchestrator | 2025-08-29 21:26:23.500376 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-08-29 21:26:23.500387 | orchestrator | Friday 29 August 2025 21:26:11 +0000 (0:00:00.970) 0:00:03.383 ********* 2025-08-29 21:26:23.500398 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.500409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:26:23.500420 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:26:23.500430 | orchestrator | 2025-08-29 21:26:23.500441 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-08-29 21:26:23.500452 | orchestrator | Friday 29 August 2025 21:26:11 +0000 (0:00:00.285) 0:00:03.668 ********* 2025-08-29 21:26:23.500465 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.500476 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:23.500489 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:23.500501 | orchestrator | 2025-08-29 21:26:23.500513 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:26:23.500525 | orchestrator | Friday 29 August 2025 21:26:12 +0000 (0:00:00.451) 0:00:04.119 ********* 2025-08-29 21:26:23.500538 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.500549 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:23.500561 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:23.500573 | orchestrator | 2025-08-29 21:26:23.500585 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-08-29 21:26:23.500630 | orchestrator | Friday 29 August 2025 21:26:12 +0000 (0:00:00.305) 0:00:04.425 ********* 2025-08-29 21:26:23.500642 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.500655 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:26:23.500668 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:26:23.500680 | orchestrator | 2025-08-29 21:26:23.500692 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-08-29 21:26:23.500704 | orchestrator | Friday 29 August 2025 21:26:12 +0000 (0:00:00.271) 0:00:04.696 ********* 2025-08-29 21:26:23.500716 | orchestrator | ok: 
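The container-existence and ceph-mon-running tests above reduce to inspecting the Docker state of the monitor container on each of testbed-node-0/1/2. A manual equivalent is sketched below; the `ceph-mon-<hostname>` container name is an assumption based on the `ceph-mds-…` and `ceph-crash-…` names visible later in this log:

# Check that the ceph-mon container exists and is running on this monitor host.
name="ceph-mon-$(hostname -s)"   # assumed naming scheme

state=$(docker inspect -f '{{.State.Status}}' "${name}" 2>/dev/null || echo "missing")
if [ "${state}" != "running" ]; then
    echo "FAILED: ${name} is ${state}" >&2
    exit 1
fi
echo "PASSED: ${name} is running"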
[testbed-node-0] 2025-08-29 21:26:23.500728 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:23.500740 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:23.500752 | orchestrator | 2025-08-29 21:26:23.500764 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:26:23.500776 | orchestrator | Friday 29 August 2025 21:26:12 +0000 (0:00:00.269) 0:00:04.966 ********* 2025-08-29 21:26:23.500789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.500802 | orchestrator | 2025-08-29 21:26:23.500821 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:26:23.500851 | orchestrator | Friday 29 August 2025 21:26:13 +0000 (0:00:00.237) 0:00:05.203 ********* 2025-08-29 21:26:23.500898 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.500919 | orchestrator | 2025-08-29 21:26:23.500936 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 21:26:23.500947 | orchestrator | Friday 29 August 2025 21:26:13 +0000 (0:00:00.594) 0:00:05.797 ********* 2025-08-29 21:26:23.500958 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.500969 | orchestrator | 2025-08-29 21:26:23.500979 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:23.500990 | orchestrator | Friday 29 August 2025 21:26:13 +0000 (0:00:00.244) 0:00:06.042 ********* 2025-08-29 21:26:23.501001 | orchestrator | 2025-08-29 21:26:23.501011 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:23.501022 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.068) 0:00:06.110 ********* 2025-08-29 21:26:23.501033 | orchestrator | 2025-08-29 21:26:23.501043 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:23.501055 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.066) 0:00:06.177 ********* 2025-08-29 21:26:23.501065 | orchestrator | 2025-08-29 21:26:23.501076 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:26:23.501086 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.089) 0:00:06.266 ********* 2025-08-29 21:26:23.501097 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501108 | orchestrator | 2025-08-29 21:26:23.501118 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-08-29 21:26:23.501129 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.246) 0:00:06.513 ********* 2025-08-29 21:26:23.501140 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501151 | orchestrator | 2025-08-29 21:26:23.501179 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-08-29 21:26:23.501191 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.237) 0:00:06.750 ********* 2025-08-29 21:26:23.501201 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501212 | orchestrator | 2025-08-29 21:26:23.501223 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-08-29 21:26:23.501234 | orchestrator | Friday 29 August 2025 21:26:14 +0000 (0:00:00.119) 0:00:06.870 ********* 2025-08-29 21:26:23.501245 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:26:23.501255 | orchestrator | 2025-08-29 
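The quorum test pulls the monmap out of one mon container and verifies that every monitor listed there is part of the quorum. Outside the validator the same check can be approximated with `ceph quorum_status`; a sketch assuming `jq`:

# Compare the number of monitors in the monmap with the number currently in quorum.
status=$(ceph quorum_status -f json)
total=$(echo "${status}" | jq '.monmap.mons | length')
in_quorum=$(echo "${status}" | jq '.quorum | length')

if [ "${total}" -ne "${in_quorum}" ]; then
    echo "FAILED: only ${in_quorum}/${total} monitors in quorum" >&2
    exit 1
fi
echo "PASSED: all ${total} monitors in quorum"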
21:26:23.501266 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-08-29 21:26:23.501277 | orchestrator | Friday 29 August 2025 21:26:16 +0000 (0:00:01.591) 0:00:08.461 ********* 2025-08-29 21:26:23.501287 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501298 | orchestrator | 2025-08-29 21:26:23.501308 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-08-29 21:26:23.501319 | orchestrator | Friday 29 August 2025 21:26:16 +0000 (0:00:00.295) 0:00:08.757 ********* 2025-08-29 21:26:23.501330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501341 | orchestrator | 2025-08-29 21:26:23.501357 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-08-29 21:26:23.501368 | orchestrator | Friday 29 August 2025 21:26:16 +0000 (0:00:00.141) 0:00:08.898 ********* 2025-08-29 21:26:23.501378 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501389 | orchestrator | 2025-08-29 21:26:23.501399 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-08-29 21:26:23.501410 | orchestrator | Friday 29 August 2025 21:26:17 +0000 (0:00:00.460) 0:00:09.358 ********* 2025-08-29 21:26:23.501421 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501432 | orchestrator | 2025-08-29 21:26:23.501443 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-08-29 21:26:23.501453 | orchestrator | Friday 29 August 2025 21:26:17 +0000 (0:00:00.308) 0:00:09.667 ********* 2025-08-29 21:26:23.501473 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501483 | orchestrator | 2025-08-29 21:26:23.501494 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-08-29 21:26:23.501505 | orchestrator | Friday 29 August 2025 21:26:17 +0000 (0:00:00.114) 0:00:09.782 ********* 2025-08-29 21:26:23.501515 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501526 | orchestrator | 2025-08-29 21:26:23.501537 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-08-29 21:26:23.501547 | orchestrator | Friday 29 August 2025 21:26:17 +0000 (0:00:00.140) 0:00:09.922 ********* 2025-08-29 21:26:23.501558 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501568 | orchestrator | 2025-08-29 21:26:23.501579 | orchestrator | TASK [Gather status data] ****************************************************** 2025-08-29 21:26:23.501611 | orchestrator | Friday 29 August 2025 21:26:17 +0000 (0:00:00.115) 0:00:10.038 ********* 2025-08-29 21:26:23.501624 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:26:23.501635 | orchestrator | 2025-08-29 21:26:23.501645 | orchestrator | TASK [Set health test data] **************************************************** 2025-08-29 21:26:23.501656 | orchestrator | Friday 29 August 2025 21:26:19 +0000 (0:00:01.314) 0:00:11.352 ********* 2025-08-29 21:26:23.501666 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501677 | orchestrator | 2025-08-29 21:26:23.501688 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-08-29 21:26:23.501698 | orchestrator | Friday 29 August 2025 21:26:19 +0000 (0:00:00.295) 0:00:11.648 ********* 2025-08-29 21:26:23.501709 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501720 | orchestrator | 2025-08-29 21:26:23.501730 | 
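The FSID test compares the cluster FSID reported by the monitors with the FSID pinned in the configuration repository, and the status data gathered next feeds the health checks that follow. A manual FSID spot check, assuming the usual /etc/ceph/ceph.conf layout on the node:

# The FSID reported by the cluster must match the fsid pinned in ceph.conf.
cluster_fsid=$(ceph fsid)
config_fsid=$(awk -F' *= *' '/^fsid/ {print $2}' /etc/ceph/ceph.conf)

if [ "${cluster_fsid}" != "${config_fsid}" ]; then
    echo "FAILED: cluster fsid ${cluster_fsid} differs from configured ${config_fsid}" >&2
    exit 1
fi
echo "PASSED: fsid ${cluster_fsid}"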
orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-08-29 21:26:23.501741 | orchestrator | Friday 29 August 2025 21:26:19 +0000 (0:00:00.141) 0:00:11.789 ********* 2025-08-29 21:26:23.501752 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:23.501762 | orchestrator | 2025-08-29 21:26:23.501773 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-08-29 21:26:23.501784 | orchestrator | Friday 29 August 2025 21:26:19 +0000 (0:00:00.137) 0:00:11.927 ********* 2025-08-29 21:26:23.501794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501809 | orchestrator | 2025-08-29 21:26:23.501828 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-08-29 21:26:23.501845 | orchestrator | Friday 29 August 2025 21:26:20 +0000 (0:00:00.132) 0:00:12.059 ********* 2025-08-29 21:26:23.501863 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501883 | orchestrator | 2025-08-29 21:26:23.501894 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 21:26:23.501905 | orchestrator | Friday 29 August 2025 21:26:20 +0000 (0:00:00.129) 0:00:12.189 ********* 2025-08-29 21:26:23.501915 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.501926 | orchestrator | 2025-08-29 21:26:23.501937 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 21:26:23.501947 | orchestrator | Friday 29 August 2025 21:26:20 +0000 (0:00:00.494) 0:00:12.683 ********* 2025-08-29 21:26:23.501958 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:23.501969 | orchestrator | 2025-08-29 21:26:23.501979 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:26:23.501990 | orchestrator | Friday 29 August 2025 21:26:21 +0000 (0:00:00.618) 0:00:13.302 ********* 2025-08-29 21:26:23.502001 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.502011 | orchestrator | 2025-08-29 21:26:23.502072 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:26:23.502084 | orchestrator | Friday 29 August 2025 21:26:22 +0000 (0:00:01.535) 0:00:14.837 ********* 2025-08-29 21:26:23.502095 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.502105 | orchestrator | 2025-08-29 21:26:23.502116 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 21:26:23.502127 | orchestrator | Friday 29 August 2025 21:26:23 +0000 (0:00:00.252) 0:00:15.090 ********* 2025-08-29 21:26:23.502146 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:23.502156 | orchestrator | 2025-08-29 21:26:23.502175 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:25.533087 | orchestrator | Friday 29 August 2025 21:26:23 +0000 (0:00:00.235) 0:00:15.325 ********* 2025-08-29 21:26:25.533181 | orchestrator | 2025-08-29 21:26:25.533198 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:25.533210 | orchestrator | Friday 29 August 2025 21:26:23 +0000 (0:00:00.064) 0:00:15.390 ********* 2025-08-29 21:26:25.533221 | orchestrator | 2025-08-29 21:26:25.533233 | orchestrator | TASK 
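The cluster-health tests come in a tolerant and a strict flavour; in this run the tolerant check passed and both strict variants were skipped. Presumably the tolerant gate accepts anything short of HEALTH_ERR while the strict gate insists on HEALTH_OK; a sketch of both gates under that assumption:

health=$(ceph health | awk '{print $1}')   # HEALTH_OK, HEALTH_WARN or HEALTH_ERR

# Tolerant gate (assumed semantics): only HEALTH_ERR fails the test.
[ "${health}" != "HEALTH_ERR" ] || { echo "FAILED: ${health}" >&2; exit 1; }

# Strict gate: only HEALTH_OK passes.
[ "${health}" = "HEALTH_OK" ] || { echo "FAILED (strict): ${health}" >&2; exit 1; }
echo "PASSED: ${health}"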
[Flush handlers] ********************************************************** 2025-08-29 21:26:25.533243 | orchestrator | Friday 29 August 2025 21:26:23 +0000 (0:00:00.068) 0:00:15.459 ********* 2025-08-29 21:26:25.533254 | orchestrator | 2025-08-29 21:26:25.533265 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 21:26:25.533275 | orchestrator | Friday 29 August 2025 21:26:23 +0000 (0:00:00.072) 0:00:15.531 ********* 2025-08-29 21:26:25.533287 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:25.533297 | orchestrator | 2025-08-29 21:26:25.533308 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:26:25.533319 | orchestrator | Friday 29 August 2025 21:26:24 +0000 (0:00:01.267) 0:00:16.798 ********* 2025-08-29 21:26:25.533329 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-08-29 21:26:25.533340 | orchestrator |  "msg": [ 2025-08-29 21:26:25.533352 | orchestrator |  "Validator run completed.", 2025-08-29 21:26:25.533364 | orchestrator |  "You can find the report file here:", 2025-08-29 21:26:25.533375 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-08-29T21:26:08+00:00-report.json", 2025-08-29 21:26:25.533386 | orchestrator |  "on the following host:", 2025-08-29 21:26:25.533397 | orchestrator |  "testbed-manager" 2025-08-29 21:26:25.533407 | orchestrator |  ] 2025-08-29 21:26:25.533419 | orchestrator | } 2025-08-29 21:26:25.533430 | orchestrator | 2025-08-29 21:26:25.533441 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:26:25.533454 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 21:26:25.533466 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:26:25.533478 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:26:25.533489 | orchestrator | 2025-08-29 21:26:25.533500 | orchestrator | 2025-08-29 21:26:25.533516 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:26:25.533527 | orchestrator | Friday 29 August 2025 21:26:25 +0000 (0:00:00.384) 0:00:17.182 ********* 2025-08-29 21:26:25.533537 | orchestrator | =============================================================================== 2025-08-29 21:26:25.533548 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.59s 2025-08-29 21:26:25.533558 | orchestrator | Aggregate test results step one ----------------------------------------- 1.54s 2025-08-29 21:26:25.533569 | orchestrator | Gather status data ------------------------------------------------------ 1.31s 2025-08-29 21:26:25.533579 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2025-08-29 21:26:25.533628 | orchestrator | Get container info ------------------------------------------------------ 0.97s 2025-08-29 21:26:25.533641 | orchestrator | Create report output directory ------------------------------------------ 0.83s 2025-08-29 21:26:25.533653 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-08-29 21:26:25.533665 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.62s 
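The handler above wrote the ceph-mons report to /opt/reports/validator on testbed-manager. The report schema itself is not shown in this log, so the sketch below only locates the newest ceph-mons report and pretty-prints it, assuming `jq` on the manager:

# Pretty-print the newest ceph-mons validator report on testbed-manager.
latest=$(ls -t /opt/reports/validator/ceph-mons-validator-*-report.json | head -n 1)
jq . "${latest}"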
2025-08-29 21:26:25.533706 | orchestrator | Aggregate test results step two ----------------------------------------- 0.59s 2025-08-29 21:26:25.533719 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.49s 2025-08-29 21:26:25.533731 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.46s 2025-08-29 21:26:25.533744 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-08-29 21:26:25.533756 | orchestrator | Print report file information ------------------------------------------- 0.38s 2025-08-29 21:26:25.533767 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s 2025-08-29 21:26:25.533780 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-08-29 21:26:25.533792 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-08-29 21:26:25.533804 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-08-29 21:26:25.533817 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-08-29 21:26:25.533829 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2025-08-29 21:26:25.533841 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.27s 2025-08-29 21:26:25.893684 | orchestrator | + osism validate ceph-mgrs 2025-08-29 21:26:56.304353 | orchestrator | 2025-08-29 21:26:56.304477 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-08-29 21:26:56.304501 | orchestrator | 2025-08-29 21:26:56.304519 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 21:26:56.304536 | orchestrator | Friday 29 August 2025 21:26:42 +0000 (0:00:00.419) 0:00:00.419 ********* 2025-08-29 21:26:56.304554 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.304571 | orchestrator | 2025-08-29 21:26:56.304588 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 21:26:56.304668 | orchestrator | Friday 29 August 2025 21:26:42 +0000 (0:00:00.652) 0:00:01.071 ********* 2025-08-29 21:26:56.304686 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.304704 | orchestrator | 2025-08-29 21:26:56.304721 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 21:26:56.304738 | orchestrator | Friday 29 August 2025 21:26:43 +0000 (0:00:00.869) 0:00:01.941 ********* 2025-08-29 21:26:56.304755 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.304773 | orchestrator | 2025-08-29 21:26:56.304790 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-08-29 21:26:56.304807 | orchestrator | Friday 29 August 2025 21:26:43 +0000 (0:00:00.235) 0:00:02.177 ********* 2025-08-29 21:26:56.304823 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.304839 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:56.304856 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:56.304874 | orchestrator | 2025-08-29 21:26:56.304892 | orchestrator | TASK [Get container info] ****************************************************** 2025-08-29 21:26:56.304911 | orchestrator | Friday 29 
August 2025 21:26:44 +0000 (0:00:00.272) 0:00:02.449 ********* 2025-08-29 21:26:56.304930 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.304948 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:56.304990 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:56.305009 | orchestrator | 2025-08-29 21:26:56.305027 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-08-29 21:26:56.305046 | orchestrator | Friday 29 August 2025 21:26:45 +0000 (0:00:00.996) 0:00:03.445 ********* 2025-08-29 21:26:56.305063 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305080 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:26:56.305096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:26:56.305113 | orchestrator | 2025-08-29 21:26:56.305132 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-08-29 21:26:56.305150 | orchestrator | Friday 29 August 2025 21:26:45 +0000 (0:00:00.275) 0:00:03.721 ********* 2025-08-29 21:26:56.305198 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.305217 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:56.305234 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:56.305250 | orchestrator | 2025-08-29 21:26:56.305267 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:26:56.305284 | orchestrator | Friday 29 August 2025 21:26:45 +0000 (0:00:00.456) 0:00:04.177 ********* 2025-08-29 21:26:56.305300 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.305317 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:56.305333 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:56.305349 | orchestrator | 2025-08-29 21:26:56.305366 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-08-29 21:26:56.305383 | orchestrator | Friday 29 August 2025 21:26:46 +0000 (0:00:00.327) 0:00:04.505 ********* 2025-08-29 21:26:56.305399 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:26:56.305431 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:26:56.305447 | orchestrator | 2025-08-29 21:26:56.305464 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-08-29 21:26:56.305480 | orchestrator | Friday 29 August 2025 21:26:46 +0000 (0:00:00.280) 0:00:04.785 ********* 2025-08-29 21:26:56.305497 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.305513 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:26:56.305529 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:26:56.305545 | orchestrator | 2025-08-29 21:26:56.305560 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:26:56.305585 | orchestrator | Friday 29 August 2025 21:26:46 +0000 (0:00:00.268) 0:00:05.054 ********* 2025-08-29 21:26:56.305634 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305654 | orchestrator | 2025-08-29 21:26:56.305671 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:26:56.305687 | orchestrator | Friday 29 August 2025 21:26:46 +0000 (0:00:00.222) 0:00:05.276 ********* 2025-08-29 21:26:56.305704 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305720 | orchestrator | 2025-08-29 21:26:56.305737 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2025-08-29 21:26:56.305753 | orchestrator | Friday 29 August 2025 21:26:47 +0000 (0:00:00.688) 0:00:05.965 ********* 2025-08-29 21:26:56.305769 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305786 | orchestrator | 2025-08-29 21:26:56.305802 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.305819 | orchestrator | Friday 29 August 2025 21:26:47 +0000 (0:00:00.267) 0:00:06.232 ********* 2025-08-29 21:26:56.305835 | orchestrator | 2025-08-29 21:26:56.305851 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.305868 | orchestrator | Friday 29 August 2025 21:26:47 +0000 (0:00:00.080) 0:00:06.312 ********* 2025-08-29 21:26:56.305884 | orchestrator | 2025-08-29 21:26:56.305901 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.305917 | orchestrator | Friday 29 August 2025 21:26:48 +0000 (0:00:00.067) 0:00:06.380 ********* 2025-08-29 21:26:56.305934 | orchestrator | 2025-08-29 21:26:56.305950 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:26:56.305966 | orchestrator | Friday 29 August 2025 21:26:48 +0000 (0:00:00.070) 0:00:06.451 ********* 2025-08-29 21:26:56.305982 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.305998 | orchestrator | 2025-08-29 21:26:56.306075 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-08-29 21:26:56.306096 | orchestrator | Friday 29 August 2025 21:26:48 +0000 (0:00:00.239) 0:00:06.691 ********* 2025-08-29 21:26:56.306114 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.306132 | orchestrator | 2025-08-29 21:26:56.306170 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-08-29 21:26:56.306188 | orchestrator | Friday 29 August 2025 21:26:48 +0000 (0:00:00.274) 0:00:06.965 ********* 2025-08-29 21:26:56.306219 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.306236 | orchestrator | 2025-08-29 21:26:56.306254 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-08-29 21:26:56.306272 | orchestrator | Friday 29 August 2025 21:26:48 +0000 (0:00:00.116) 0:00:07.082 ********* 2025-08-29 21:26:56.306289 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:26:56.306306 | orchestrator | 2025-08-29 21:26:56.306324 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-08-29 21:26:56.306342 | orchestrator | Friday 29 August 2025 21:26:50 +0000 (0:00:01.896) 0:00:08.979 ********* 2025-08-29 21:26:56.306359 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.306376 | orchestrator | 2025-08-29 21:26:56.306394 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-08-29 21:26:56.306412 | orchestrator | Friday 29 August 2025 21:26:50 +0000 (0:00:00.249) 0:00:09.229 ********* 2025-08-29 21:26:56.306429 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.306446 | orchestrator | 2025-08-29 21:26:56.306464 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-08-29 21:26:56.306482 | orchestrator | Friday 29 August 2025 21:26:51 +0000 (0:00:00.324) 0:00:09.553 ********* 2025-08-29 21:26:56.306498 | orchestrator | skipping: 
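In addition to the container checks, the ceph-mgrs validator gathers the mgr module list and fails if a required module is disabled. The same check from the CLI is sketched below, assuming `jq` and the `always_on_modules`/`enabled_modules` keys emitted by `ceph mgr module ls -f json` on recent releases; the required-module list is illustrative, not the one the validator uses:

required="balancer status"   # illustrative list, not taken from the validator

modules=$(ceph mgr module ls -f json)
enabled=$(echo "${modules}" | jq -r '.always_on_modules[], .enabled_modules[]')

for module in ${required}; do
    if ! echo "${enabled}" | grep -qx "${module}"; then
        echo "FAILED: mgr module ${module} is not enabled" >&2
        exit 1
    fi
done
echo "PASSED: required mgr modules are enabled"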
[testbed-node-0] 2025-08-29 21:26:56.306516 | orchestrator | 2025-08-29 21:26:56.306533 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-08-29 21:26:56.306551 | orchestrator | Friday 29 August 2025 21:26:51 +0000 (0:00:00.295) 0:00:09.848 ********* 2025-08-29 21:26:56.306568 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:26:56.306585 | orchestrator | 2025-08-29 21:26:56.306632 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 21:26:56.306657 | orchestrator | Friday 29 August 2025 21:26:51 +0000 (0:00:00.147) 0:00:09.995 ********* 2025-08-29 21:26:56.306674 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.306691 | orchestrator | 2025-08-29 21:26:56.306707 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 21:26:56.306724 | orchestrator | Friday 29 August 2025 21:26:51 +0000 (0:00:00.243) 0:00:10.239 ********* 2025-08-29 21:26:56.306740 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:26:56.306757 | orchestrator | 2025-08-29 21:26:56.306774 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:26:56.306790 | orchestrator | Friday 29 August 2025 21:26:52 +0000 (0:00:00.230) 0:00:10.469 ********* 2025-08-29 21:26:56.306807 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.306823 | orchestrator | 2025-08-29 21:26:56.306840 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:26:56.306857 | orchestrator | Friday 29 August 2025 21:26:53 +0000 (0:00:01.248) 0:00:11.718 ********* 2025-08-29 21:26:56.306873 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.306890 | orchestrator | 2025-08-29 21:26:56.306906 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 21:26:56.306923 | orchestrator | Friday 29 August 2025 21:26:53 +0000 (0:00:00.250) 0:00:11.969 ********* 2025-08-29 21:26:56.306939 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.306956 | orchestrator | 2025-08-29 21:26:56.306972 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.306989 | orchestrator | Friday 29 August 2025 21:26:53 +0000 (0:00:00.246) 0:00:12.216 ********* 2025-08-29 21:26:56.307005 | orchestrator | 2025-08-29 21:26:56.307022 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.307039 | orchestrator | Friday 29 August 2025 21:26:53 +0000 (0:00:00.065) 0:00:12.282 ********* 2025-08-29 21:26:56.307055 | orchestrator | 2025-08-29 21:26:56.307071 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:26:56.307088 | orchestrator | Friday 29 August 2025 21:26:53 +0000 (0:00:00.066) 0:00:12.348 ********* 2025-08-29 21:26:56.307114 | orchestrator | 2025-08-29 21:26:56.307131 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 21:26:56.307148 | orchestrator | Friday 29 August 2025 21:26:54 +0000 (0:00:00.071) 0:00:12.419 ********* 2025-08-29 21:26:56.307164 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-08-29 21:26:56.307181 | orchestrator 
| 2025-08-29 21:26:56.307197 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:26:56.307213 | orchestrator | Friday 29 August 2025 21:26:55 +0000 (0:00:01.423) 0:00:13.843 ********* 2025-08-29 21:26:56.307230 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-08-29 21:26:56.307246 | orchestrator |  "msg": [ 2025-08-29 21:26:56.307263 | orchestrator |  "Validator run completed.", 2025-08-29 21:26:56.307280 | orchestrator |  "You can find the report file here:", 2025-08-29 21:26:56.307296 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-08-29T21:26:42+00:00-report.json", 2025-08-29 21:26:56.307314 | orchestrator |  "on the following host:", 2025-08-29 21:26:56.307330 | orchestrator |  "testbed-manager" 2025-08-29 21:26:56.307347 | orchestrator |  ] 2025-08-29 21:26:56.307364 | orchestrator | } 2025-08-29 21:26:56.307380 | orchestrator | 2025-08-29 21:26:56.307396 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:26:56.307414 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 21:26:56.307433 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:26:56.307460 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:26:56.550278 | orchestrator | 2025-08-29 21:26:56.550393 | orchestrator | 2025-08-29 21:26:56.550411 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:26:56.550425 | orchestrator | Friday 29 August 2025 21:26:56 +0000 (0:00:00.790) 0:00:14.633 ********* 2025-08-29 21:26:56.550436 | orchestrator | =============================================================================== 2025-08-29 21:26:56.550447 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.90s 2025-08-29 21:26:56.550458 | orchestrator | Write report file ------------------------------------------------------- 1.42s 2025-08-29 21:26:56.550469 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s 2025-08-29 21:26:56.550479 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-08-29 21:26:56.550490 | orchestrator | Create report output directory ------------------------------------------ 0.87s 2025-08-29 21:26:56.550501 | orchestrator | Print report file information ------------------------------------------- 0.79s 2025-08-29 21:26:56.550511 | orchestrator | Aggregate test results step two ----------------------------------------- 0.69s 2025-08-29 21:26:56.550522 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-08-29 21:26:56.550533 | orchestrator | Set test result to passed if container is existing ---------------------- 0.46s 2025-08-29 21:26:56.550544 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-08-29 21:26:56.550554 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2025-08-29 21:26:56.550565 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.30s 2025-08-29 21:26:56.550576 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2025-08-29 
21:26:56.550587 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-08-29 21:26:56.550630 | orchestrator | Fail due to missing containers ------------------------------------------ 0.27s 2025-08-29 21:26:56.550642 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2025-08-29 21:26:56.550689 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.27s 2025-08-29 21:26:56.550701 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-08-29 21:26:56.550711 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2025-08-29 21:26:56.550722 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.25s 2025-08-29 21:26:56.809107 | orchestrator | + osism validate ceph-osds 2025-08-29 21:27:17.505233 | orchestrator | 2025-08-29 21:27:17.505351 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-08-29 21:27:17.505370 | orchestrator | 2025-08-29 21:27:17.505385 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-08-29 21:27:17.505401 | orchestrator | Friday 29 August 2025 21:27:12 +0000 (0:00:00.413) 0:00:00.413 ********* 2025-08-29 21:27:17.505417 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:17.505432 | orchestrator | 2025-08-29 21:27:17.505446 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 21:27:17.505461 | orchestrator | Friday 29 August 2025 21:27:13 +0000 (0:00:00.701) 0:00:01.114 ********* 2025-08-29 21:27:17.505475 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:17.505489 | orchestrator | 2025-08-29 21:27:17.505503 | orchestrator | TASK [Create report output directory] ****************************************** 2025-08-29 21:27:17.505519 | orchestrator | Friday 29 August 2025 21:27:13 +0000 (0:00:00.239) 0:00:01.353 ********* 2025-08-29 21:27:17.505533 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:17.505549 | orchestrator | 2025-08-29 21:27:17.505565 | orchestrator | TASK [Define report vars] ****************************************************** 2025-08-29 21:27:17.505581 | orchestrator | Friday 29 August 2025 21:27:15 +0000 (0:00:01.267) 0:00:02.621 ********* 2025-08-29 21:27:17.505597 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:17.505656 | orchestrator | 2025-08-29 21:27:17.505673 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-08-29 21:27:17.505689 | orchestrator | Friday 29 August 2025 21:27:15 +0000 (0:00:00.143) 0:00:02.765 ********* 2025-08-29 21:27:17.505705 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:17.505720 | orchestrator | 2025-08-29 21:27:17.505734 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-08-29 21:27:17.505749 | orchestrator | Friday 29 August 2025 21:27:15 +0000 (0:00:00.129) 0:00:02.895 ********* 2025-08-29 21:27:17.505765 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:17.505781 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:17.505796 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:17.505810 | orchestrator | 2025-08-29 21:27:17.505826 | orchestrator | TASK 
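The ceph-osds validator first loads extra vars from the Ceph configuration and derives how many OSD containers each storage node should run, plus the cluster-wide total; in this testbed every storage node carries two OSDs (ceph-osd-0/3, ceph-osd-5/1 and ceph-osd-4/2 below), so the expected total is six. A sketch of that arithmetic with the device count hard-coded as an assumption instead of being read from the real configuration:

# Assumed per-host OSD count; the validator derives this from the Ceph configuration.
devices_per_host=2
osd_hosts=(testbed-node-3 testbed-node-4 testbed-node-5)

expected_total=$((devices_per_host * ${#osd_hosts[@]}))
echo "expecting ${devices_per_host} OSDs per host, ${expected_total} in total"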
[Define OSD test variables] *********************************************** 2025-08-29 21:27:17.505842 | orchestrator | Friday 29 August 2025 21:27:15 +0000 (0:00:00.293) 0:00:03.189 ********* 2025-08-29 21:27:17.505858 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:17.505874 | orchestrator | 2025-08-29 21:27:17.505888 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-08-29 21:27:17.505903 | orchestrator | Friday 29 August 2025 21:27:15 +0000 (0:00:00.147) 0:00:03.336 ********* 2025-08-29 21:27:17.505916 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:17.505932 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:17.505946 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:17.505963 | orchestrator | 2025-08-29 21:27:17.505979 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-08-29 21:27:17.505994 | orchestrator | Friday 29 August 2025 21:27:16 +0000 (0:00:00.333) 0:00:03.669 ********* 2025-08-29 21:27:17.506007 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:17.506079 | orchestrator | 2025-08-29 21:27:17.506091 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:27:17.506101 | orchestrator | Friday 29 August 2025 21:27:16 +0000 (0:00:00.519) 0:00:04.189 ********* 2025-08-29 21:27:17.506111 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:17.506147 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:17.506156 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:17.506165 | orchestrator | 2025-08-29 21:27:17.506174 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-08-29 21:27:17.506182 | orchestrator | Friday 29 August 2025 21:27:17 +0000 (0:00:00.524) 0:00:04.714 ********* 2025-08-29 21:27:17.506207 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd060e33dcf8aa95b2833c999fd7a2c1169a9253026a9f7a4e18dbc87a81e3f07', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.506220 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c4c9893c819624f728f461ff4505c87f61167ce8ce64a20cd1bcfb63ce4f181', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.506229 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29940c56abd4a5f0a72a82d83b606a3c879d5ccd820709f48137141cd06da933', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-08-29 21:27:17.506246 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9a85f59673e5ed5e61df2388f10e5a8cd471c565cd442e7548912dd1bc330691', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.506255 | orchestrator | skipping: [testbed-node-3] => (item={'id': '82881cb1e15abe118bcf5f89c0545eeeb8c44f13bbc8a5c56ef0cf136d9fc089', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.506285 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'670f385610861f5002930e30a80aff3dd6bdcf1fac8f087837270b69c53cc554', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.506303 | orchestrator | skipping: [testbed-node-3] => (item={'id': '802cb5b2d7b093402476b0b863809b174215b55a258d4e9d9204d20bc2a24fcb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.506312 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a305a9466480468b17b5b67675ef272991890f87fda40a1f295648711fc73e1d', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-08-29 21:27:17.506322 | orchestrator | skipping: [testbed-node-3] => (item={'id': '82afcb9d261e5f27fb249b0189bffa1ed17c850a7db5e30d40c36a506ece0cb9', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 21:27:17.506330 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9f58ad0687d7ba11085d668ec7051a6ef1eeeb53206af9d83d46d6323c30565d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 21:27:17.506340 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e017e11bc1f26d0bf6457260b8db7ad8d7bdb0f86fd8a0118ec5863f9a498a32', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 21:27:17.506349 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f0152a094931e2dc168e7267bfb76c514bd385fd7e8257a6606ec45687524f88', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 21:27:17.506367 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd7261d72c9d8ea3c3f0d52e23db4aab52f21c36dae89b5e3131c27a25c093929', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:17.506377 | orchestrator | ok: [testbed-node-3] => (item={'id': 'c6f044092809fd844acd481d085002899b7dc9889e36fdbafc799928e047b37f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:17.506386 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ad74bf57a7f4efdd4732281f880040457db91846da506c80c193bd9b099e5932', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-08-29 21:27:17.506395 | orchestrator | skipping: [testbed-node-3] => (item={'id': '235cd8ac4dab464e958b35dc4c8bd5b0e745c6a1c94a7a826273911ea9d9c12b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:17.506404 | orchestrator | skipping: [testbed-node-3] => (item={'id': '70b1096a8ee606302b75462c73175fb5c8f9e59543c462ddca4b2553004d952c', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:17.506413 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c5c9138dd71c04a90776bf85034c6bb06d41a322773562e83c9eb1ef54c635ff', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:17.506426 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a31c4d7bb9d9a9beeb96d9c2e271e947f36dc6897f678908d7d0b22a87b4655d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:17.506435 | orchestrator | skipping: [testbed-node-3] => (item={'id': '505a1a142ed410abf4d835ba3cfc5229eeb76f54779af43358446cfee0d1ed99', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 21:27:17.506450 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c886b55b3632f14f23e922f66ee20114da157b4303734f160b2064b19015edb6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.768356 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b12fe143c8dc4e6e331ba5028522fb671389e88a203974855aa3526220c1a764', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.768429 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eddde6d24938fd3afcd5ed0da29e4eb09cd9784d9ae019315e220caef0c56fd6', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-08-29 21:27:17.768441 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a48903cdcc45462c6bbc7cd1b16976e55860d994ddc0d1ec2e0b1da25b0507e', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.768450 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6eed6a386d3304e9ffa588b5e99566eb1c3cd8f763d83a02bd01acbd673797b2', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.768459 | orchestrator | skipping: [testbed-node-4] => (item={'id': '58360f08017bba62111876df747ceb35f0522fb7d2f5074ede41ae48012a4b9e', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.768485 | orchestrator | skipping: [testbed-node-4] => (item={'id': '732fff48660962fd1fa5f8989050b9e7c5117774f5d817cbe4e93029e9f0ab15', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.768494 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd8310176c664405928dd367a0d0e112ad8ce735c44cf00d93f27e6eca1be91cb', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 
'running', 'status': 'Up 13 minutes'})  2025-08-29 21:27:17.768503 | orchestrator | skipping: [testbed-node-4] => (item={'id': '098dc9595c1df18e10a8f88e89140b478ca54b6ef6b8bf420a19ad5ea01a92f4', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 21:27:17.768512 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'adeddad2fa7342ce3ff20eb1e32211edd4c230dc8792c0f5666796fdf8b53546', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 21:27:17.768522 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd348c8c436dadb02af3ff44acc7450e8587d7051f57eb5ce1a6b0da43084cde6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 21:27:17.768531 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aced8d682d78622b9e7dabe3cb799b64844a8113c497dae87c89d763d9f4d8a9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 21:27:17.768543 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd94ba8b127a93fcf76e814513151736421291199220cae86701b7234dec14443', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:17.768552 | orchestrator | ok: [testbed-node-4] => (item={'id': '1a94359b801982fae6ad9edcade317e7076e82921cc3691d7338262d4a718b60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:17.768561 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e203f51fd5822c03d1bc16efb04f07043f9757dd2371a7a249c260a4171b94d', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-08-29 21:27:17.768582 | orchestrator | skipping: [testbed-node-4] => (item={'id': '01405db89669d67fa766f871a5053ddf70d1f0cb1ead1ef24caef685d72cdac2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:17.768591 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3491b3b4f33522a4c0b78b2f9ec5af4176a8365ea0aa0b30ccfa9e11f650c499', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:17.768600 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3ed4e1b720e59a92e5089db819d757a3a4bb46974b706f428ef6aa1da988f50', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:17.768639 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c3c4f204fc4f8edeceaa8e41a5e4a855f87e4c45bd17256be61ffb2e195468b7', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:17.768653 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0fa64207e1fc217d5ec596b6da1dc3a42df7bacd19cb1c29751e8dc9ed831a52', 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 21:27:17.768662 | orchestrator | skipping: [testbed-node-5] => (item={'id': '42026ce1ac0501f57ea37ab496c202b227850ac711a0155b0c09034a157722df', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.768671 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3149d16fc07edaa2d232fce548f7554be22939257a9aa4cf2552dfbfa4be0389', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-08-29 21:27:17.768680 | orchestrator | skipping: [testbed-node-5] => (item={'id': '10efd74044abb1d91a0a354abd23f12d2db4287a3d7798c755fb2a5717de8f1f', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-08-29 21:27:17.768688 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ce0087744d7a73fb84ca3d9fd32b6ff42deee275db0b13be2d48a4616d3350a2', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.768697 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f30aa7aeac3441ae689f7ba2ec8ea3ec5ace5a07e3e7751f653649ebe10b29f0', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-08-29 21:27:17.768706 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e47202b0f65818fa02ee33e7f1fe22df6ac67301b7c5545f6dfa098d38f5870f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.768715 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55ec36f06e499065bfdc09b994ec6b45281c494538a45d5c7357fe124e798b48', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-08-29 21:27:17.768734 | orchestrator | skipping: [testbed-node-5] => (item={'id': '53e6ed398bd23ce2f67d3a85e6bf4029b3417e5fd347d3de1e5ac5e29bd8dba4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-08-29 21:27:17.768743 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd3c526a6c72aeab12a81200cdc5e0f81fcc251120428f11887807aa1e5f8e7c2', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 21:27:17.768752 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6abc2ca76d73379f33bf99ddf8daf72edc29d6430083de53c28173dffcd9b3d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 21:27:17.768766 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a83a3427d29c9359cc5e7a138e3140813c71b0caaa0113379c3f6ad868aef80', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': 
'/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 21:27:25.297852 | orchestrator | skipping: [testbed-node-5] => (item={'id': '664ca3cfbd6332ac08176109aad0c33b83d52f093829965cd5ead64589677dfb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 21:27:25.297966 | orchestrator | ok: [testbed-node-5] => (item={'id': '7017f9b85034260c05b67aa8930628db09214b84efd273eac7386f36ff1bfc86', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:25.298007 | orchestrator | ok: [testbed-node-5] => (item={'id': '0f73dac0865603fbb265f9ca18f25b1d8ce80818742fc86f4c5737ad8a3bff76', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 21:27:25.298093 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9ba34536ae123968b1ec8d2352393a0424d2a2fdb50efc6ea21b8798351cd38b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-08-29 21:27:25.298118 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c809a763d7635262fdf253926bf8b6e7f7b1660ffe7562cea79dca397eb46c5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:25.298138 | orchestrator | skipping: [testbed-node-5] => (item={'id': '18c4c6aeafcc4d5090327467e9b91f6fd0c9f21d228a5fbb4cd0fe23def287a2', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 21:27:25.298150 | orchestrator | skipping: [testbed-node-5] => (item={'id': '440d1bc952ed7cb3bba10537a4017c94e712f5eec73ba6d158625f8ad82bd5d5', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:25.298161 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eae99f814b227e83b4b0840f73ab2ca0452674e8bdd3a0078eb02bc04c39ed0e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 21:27:25.298172 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e3e8eac8f8a5bd2012cd3a3943454bd1885db875099d1ac78637e10c35a5dcea', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 21:27:25.298182 | orchestrator | 2025-08-29 21:27:25.298195 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-08-29 21:27:25.298208 | orchestrator | Friday 29 August 2025 21:27:17 +0000 (0:00:00.543) 0:00:05.257 ********* 2025-08-29 21:27:25.298219 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.298230 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.298241 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.298251 | orchestrator | 2025-08-29 21:27:25.298262 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-08-29 21:27:25.298274 | orchestrator | Friday 29 August 2025 21:27:18 +0000 (0:00:00.296) 0:00:05.553 ********* 2025-08-29 
21:27:25.298292 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.298309 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:25.298328 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:25.298346 | orchestrator | 2025-08-29 21:27:25.298365 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-08-29 21:27:25.298384 | orchestrator | Friday 29 August 2025 21:27:18 +0000 (0:00:00.300) 0:00:05.854 ********* 2025-08-29 21:27:25.298403 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.298439 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.298459 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.298478 | orchestrator | 2025-08-29 21:27:25.298499 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:27:25.298517 | orchestrator | Friday 29 August 2025 21:27:18 +0000 (0:00:00.482) 0:00:06.337 ********* 2025-08-29 21:27:25.298537 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.298556 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.298574 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.298635 | orchestrator | 2025-08-29 21:27:25.298656 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-08-29 21:27:25.298673 | orchestrator | Friday 29 August 2025 21:27:19 +0000 (0:00:00.294) 0:00:06.631 ********* 2025-08-29 21:27:25.298691 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-08-29 21:27:25.298711 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-08-29 21:27:25.298730 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.298750 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-08-29 21:27:25.298769 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-08-29 21:27:25.298814 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:25.298835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-08-29 21:27:25.298855 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-08-29 21:27:25.298873 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:25.298892 | orchestrator | 2025-08-29 21:27:25.298909 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-08-29 21:27:25.298928 | orchestrator | Friday 29 August 2025 21:27:19 +0000 (0:00:00.320) 0:00:06.951 ********* 2025-08-29 21:27:25.298947 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.298966 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.298986 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.299005 | orchestrator | 2025-08-29 21:27:25.299025 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 21:27:25.299044 | orchestrator | Friday 29 August 2025 21:27:19 +0000 (0:00:00.300) 0:00:07.252 ********* 2025-08-29 21:27:25.299060 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299078 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:25.299097 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:25.299117 | orchestrator | 2025-08-29 21:27:25.299137 
| orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 21:27:25.299157 | orchestrator | Friday 29 August 2025 21:27:20 +0000 (0:00:00.546) 0:00:07.798 ********* 2025-08-29 21:27:25.299175 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299195 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:25.299213 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:25.299230 | orchestrator | 2025-08-29 21:27:25.299246 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-08-29 21:27:25.299266 | orchestrator | Friday 29 August 2025 21:27:20 +0000 (0:00:00.305) 0:00:08.104 ********* 2025-08-29 21:27:25.299286 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.299305 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.299324 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.299344 | orchestrator | 2025-08-29 21:27:25.299363 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:27:25.299382 | orchestrator | Friday 29 August 2025 21:27:20 +0000 (0:00:00.294) 0:00:08.399 ********* 2025-08-29 21:27:25.299402 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299422 | orchestrator | 2025-08-29 21:27:25.299441 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:27:25.299459 | orchestrator | Friday 29 August 2025 21:27:21 +0000 (0:00:00.269) 0:00:08.668 ********* 2025-08-29 21:27:25.299477 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299496 | orchestrator | 2025-08-29 21:27:25.299517 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 21:27:25.299537 | orchestrator | Friday 29 August 2025 21:27:21 +0000 (0:00:00.269) 0:00:08.938 ********* 2025-08-29 21:27:25.299556 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299574 | orchestrator | 2025-08-29 21:27:25.299590 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:25.299655 | orchestrator | Friday 29 August 2025 21:27:21 +0000 (0:00:00.232) 0:00:09.171 ********* 2025-08-29 21:27:25.299673 | orchestrator | 2025-08-29 21:27:25.299689 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:25.299706 | orchestrator | Friday 29 August 2025 21:27:21 +0000 (0:00:00.064) 0:00:09.235 ********* 2025-08-29 21:27:25.299723 | orchestrator | 2025-08-29 21:27:25.299739 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:25.299756 | orchestrator | Friday 29 August 2025 21:27:21 +0000 (0:00:00.061) 0:00:09.296 ********* 2025-08-29 21:27:25.299773 | orchestrator | 2025-08-29 21:27:25.299789 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:27:25.299807 | orchestrator | Friday 29 August 2025 21:27:22 +0000 (0:00:00.225) 0:00:09.522 ********* 2025-08-29 21:27:25.299824 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:25.299840 | orchestrator | 2025-08-29 21:27:25.299858 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-08-29 21:27:25.299877 | orchestrator | Friday 29 August 2025 21:27:22 +0000 (0:00:00.276) 0:00:09.799 ********* 2025-08-29 21:27:25.299895 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 21:27:25.299914 | orchestrator | 2025-08-29 21:27:25.299933 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:27:25.299953 | orchestrator | Friday 29 August 2025 21:27:22 +0000 (0:00:00.266) 0:00:10.066 ********* 2025-08-29 21:27:25.299973 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.299991 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:25.300010 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:25.300029 | orchestrator | 2025-08-29 21:27:25.300058 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-08-29 21:27:25.300074 | orchestrator | Friday 29 August 2025 21:27:22 +0000 (0:00:00.286) 0:00:10.352 ********* 2025-08-29 21:27:25.300094 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.300114 | orchestrator | 2025-08-29 21:27:25.300133 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-08-29 21:27:25.300152 | orchestrator | Friday 29 August 2025 21:27:23 +0000 (0:00:00.272) 0:00:10.624 ********* 2025-08-29 21:27:25.300171 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 21:27:25.300190 | orchestrator | 2025-08-29 21:27:25.300209 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-08-29 21:27:25.300230 | orchestrator | Friday 29 August 2025 21:27:24 +0000 (0:00:01.621) 0:00:12.246 ********* 2025-08-29 21:27:25.300249 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.300268 | orchestrator | 2025-08-29 21:27:25.300286 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-08-29 21:27:25.300305 | orchestrator | Friday 29 August 2025 21:27:24 +0000 (0:00:00.148) 0:00:12.394 ********* 2025-08-29 21:27:25.300324 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:25.300344 | orchestrator | 2025-08-29 21:27:25.300364 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-08-29 21:27:25.300385 | orchestrator | Friday 29 August 2025 21:27:25 +0000 (0:00:00.285) 0:00:12.680 ********* 2025-08-29 21:27:25.300414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.106178 | orchestrator | 2025-08-29 21:27:38.106289 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-08-29 21:27:38.106304 | orchestrator | Friday 29 August 2025 21:27:25 +0000 (0:00:00.117) 0:00:12.797 ********* 2025-08-29 21:27:38.106315 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106326 | orchestrator | 2025-08-29 21:27:38.106336 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:27:38.106345 | orchestrator | Friday 29 August 2025 21:27:25 +0000 (0:00:00.146) 0:00:12.944 ********* 2025-08-29 21:27:38.106355 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106365 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106374 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.106384 | orchestrator | 2025-08-29 21:27:38.106393 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-08-29 21:27:38.106427 | orchestrator | Friday 29 August 2025 21:27:25 +0000 (0:00:00.474) 0:00:13.418 ********* 2025-08-29 21:27:38.106444 | orchestrator | changed: [testbed-node-3] 2025-08-29 21:27:38.106460 | orchestrator | changed: [testbed-node-5] 2025-08-29 
21:27:38.106470 | orchestrator | changed: [testbed-node-4] 2025-08-29 21:27:38.106479 | orchestrator | 2025-08-29 21:27:38.106489 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-08-29 21:27:38.106498 | orchestrator | Friday 29 August 2025 21:27:28 +0000 (0:00:02.318) 0:00:15.736 ********* 2025-08-29 21:27:38.106508 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106517 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106526 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.106536 | orchestrator | 2025-08-29 21:27:38.106545 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-08-29 21:27:38.106555 | orchestrator | Friday 29 August 2025 21:27:28 +0000 (0:00:00.302) 0:00:16.038 ********* 2025-08-29 21:27:38.106564 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106574 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106583 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.106592 | orchestrator | 2025-08-29 21:27:38.106602 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-08-29 21:27:38.106643 | orchestrator | Friday 29 August 2025 21:27:29 +0000 (0:00:00.512) 0:00:16.551 ********* 2025-08-29 21:27:38.106652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.106663 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:38.106672 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:38.106682 | orchestrator | 2025-08-29 21:27:38.106691 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-08-29 21:27:38.106701 | orchestrator | Friday 29 August 2025 21:27:29 +0000 (0:00:00.499) 0:00:17.051 ********* 2025-08-29 21:27:38.106711 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106721 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106731 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.106742 | orchestrator | 2025-08-29 21:27:38.106753 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-08-29 21:27:38.106764 | orchestrator | Friday 29 August 2025 21:27:29 +0000 (0:00:00.305) 0:00:17.357 ********* 2025-08-29 21:27:38.106774 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.106785 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:38.106796 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:38.106806 | orchestrator | 2025-08-29 21:27:38.106817 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-08-29 21:27:38.106828 | orchestrator | Friday 29 August 2025 21:27:30 +0000 (0:00:00.265) 0:00:17.622 ********* 2025-08-29 21:27:38.106839 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.106849 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:38.106861 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:38.106871 | orchestrator | 2025-08-29 21:27:38.106882 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 21:27:38.106893 | orchestrator | Friday 29 August 2025 21:27:30 +0000 (0:00:00.351) 0:00:17.974 ********* 2025-08-29 21:27:38.106904 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106915 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106926 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.106937 | orchestrator | 2025-08-29 21:27:38.106948 | 
orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-08-29 21:27:38.106958 | orchestrator | Friday 29 August 2025 21:27:31 +0000 (0:00:00.736) 0:00:18.710 ********* 2025-08-29 21:27:38.106968 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.106979 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.106989 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.107001 | orchestrator | 2025-08-29 21:27:38.107012 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-08-29 21:27:38.107023 | orchestrator | Friday 29 August 2025 21:27:31 +0000 (0:00:00.483) 0:00:19.194 ********* 2025-08-29 21:27:38.107041 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.107051 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.107063 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.107074 | orchestrator | 2025-08-29 21:27:38.107083 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-08-29 21:27:38.107093 | orchestrator | Friday 29 August 2025 21:27:31 +0000 (0:00:00.295) 0:00:19.489 ********* 2025-08-29 21:27:38.107103 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.107112 | orchestrator | skipping: [testbed-node-4] 2025-08-29 21:27:38.107122 | orchestrator | skipping: [testbed-node-5] 2025-08-29 21:27:38.107131 | orchestrator | 2025-08-29 21:27:38.107141 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-08-29 21:27:38.107150 | orchestrator | Friday 29 August 2025 21:27:32 +0000 (0:00:00.310) 0:00:19.800 ********* 2025-08-29 21:27:38.107160 | orchestrator | ok: [testbed-node-3] 2025-08-29 21:27:38.107169 | orchestrator | ok: [testbed-node-4] 2025-08-29 21:27:38.107179 | orchestrator | ok: [testbed-node-5] 2025-08-29 21:27:38.107188 | orchestrator | 2025-08-29 21:27:38.107198 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-08-29 21:27:38.107208 | orchestrator | Friday 29 August 2025 21:27:32 +0000 (0:00:00.496) 0:00:20.297 ********* 2025-08-29 21:27:38.107217 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:38.107227 | orchestrator | 2025-08-29 21:27:38.107237 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-08-29 21:27:38.107246 | orchestrator | Friday 29 August 2025 21:27:33 +0000 (0:00:00.273) 0:00:20.570 ********* 2025-08-29 21:27:38.107256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 21:27:38.107265 | orchestrator | 2025-08-29 21:27:38.107302 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 21:27:38.107321 | orchestrator | Friday 29 August 2025 21:27:33 +0000 (0:00:00.267) 0:00:20.837 ********* 2025-08-29 21:27:38.107338 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:38.107355 | orchestrator | 2025-08-29 21:27:38.107373 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 21:27:38.107390 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:01.669) 0:00:22.507 ********* 2025-08-29 21:27:38.107403 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:38.107413 | orchestrator | 2025-08-29 21:27:38.107422 | orchestrator | TASK [Aggregate test results step three] *************************************** 
2025-08-29 21:27:38.107432 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:00.284) 0:00:22.791 ********* 2025-08-29 21:27:38.107441 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:38.107451 | orchestrator | 2025-08-29 21:27:38.107460 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:38.107470 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:00.248) 0:00:23.040 ********* 2025-08-29 21:27:38.107479 | orchestrator | 2025-08-29 21:27:38.107493 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:38.107507 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:00.064) 0:00:23.104 ********* 2025-08-29 21:27:38.107517 | orchestrator | 2025-08-29 21:27:38.107527 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 21:27:38.107537 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:00.063) 0:00:23.168 ********* 2025-08-29 21:27:38.107546 | orchestrator | 2025-08-29 21:27:38.107555 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-08-29 21:27:38.107565 | orchestrator | Friday 29 August 2025 21:27:35 +0000 (0:00:00.066) 0:00:23.235 ********* 2025-08-29 21:27:38.107642 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 21:27:38.107654 | orchestrator | 2025-08-29 21:27:38.107664 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 21:27:38.107674 | orchestrator | Friday 29 August 2025 21:27:37 +0000 (0:00:01.552) 0:00:24.787 ********* 2025-08-29 21:27:38.107692 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-08-29 21:27:38.107701 | orchestrator |  "msg": [ 2025-08-29 21:27:38.107712 | orchestrator |  "Validator run completed.", 2025-08-29 21:27:38.107721 | orchestrator |  "You can find the report file here:", 2025-08-29 21:27:38.107731 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-08-29T21:27:13+00:00-report.json", 2025-08-29 21:27:38.107742 | orchestrator |  "on the following host:", 2025-08-29 21:27:38.107752 | orchestrator |  "testbed-manager" 2025-08-29 21:27:38.107762 | orchestrator |  ] 2025-08-29 21:27:38.107772 | orchestrator | } 2025-08-29 21:27:38.107781 | orchestrator | 2025-08-29 21:27:38.107791 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:27:38.107801 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-08-29 21:27:38.107813 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 21:27:38.107823 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 21:27:38.107832 | orchestrator | 2025-08-29 21:27:38.107842 | orchestrator | 2025-08-29 21:27:38.107851 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:27:38.107861 | orchestrator | Friday 29 August 2025 21:27:38 +0000 (0:00:00.795) 0:00:25.582 ********* 2025-08-29 21:27:38.107870 | orchestrator | =============================================================================== 2025-08-29 21:27:38.107879 | orchestrator | List ceph LVM volumes and collect data 
---------------------------------- 2.32s 2025-08-29 21:27:38.107889 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2025-08-29 21:27:38.107898 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.62s 2025-08-29 21:27:38.107908 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2025-08-29 21:27:38.107921 | orchestrator | Create report output directory ------------------------------------------ 1.27s 2025-08-29 21:27:38.107931 | orchestrator | Print report file information ------------------------------------------- 0.80s 2025-08-29 21:27:38.107941 | orchestrator | Prepare test data ------------------------------------------------------- 0.74s 2025-08-29 21:27:38.107950 | orchestrator | Get timestamp for report file ------------------------------------------- 0.70s 2025-08-29 21:27:38.107959 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.55s 2025-08-29 21:27:38.107969 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s 2025-08-29 21:27:38.107978 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2025-08-29 21:27:38.107988 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.52s 2025-08-29 21:27:38.107997 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2025-08-29 21:27:38.108006 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.50s 2025-08-29 21:27:38.108016 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.50s 2025-08-29 21:27:38.108025 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s 2025-08-29 21:27:38.108043 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s 2025-08-29 21:27:38.407371 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-08-29 21:27:38.407456 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.35s 2025-08-29 21:27:38.407467 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s 2025-08-29 21:27:38.700566 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-08-29 21:27:38.708461 | orchestrator | + set -e 2025-08-29 21:27:38.708496 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 21:27:38.708508 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 21:27:38.708519 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 21:27:38.708529 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 21:27:38.708538 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 21:27:38.708548 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 21:27:38.708559 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 21:27:38.708568 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 21:27:38.708578 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 21:27:38.708587 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 21:27:38.708597 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 21:27:38.708638 | orchestrator | ++ export ARA=false 2025-08-29 21:27:38.708657 | orchestrator | ++ ARA=false 2025-08-29 21:27:38.708673 | orchestrator | ++ export DEPLOY_MODE=manager 
2025-08-29 21:27:38.708685 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 21:27:38.708695 | orchestrator | ++ export TEMPEST=false 2025-08-29 21:27:38.708704 | orchestrator | ++ TEMPEST=false 2025-08-29 21:27:38.708713 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 21:27:38.708723 | orchestrator | ++ IS_ZUUL=true 2025-08-29 21:27:38.708732 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 21:27:38.708742 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.51 2025-08-29 21:27:38.708751 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 21:27:38.708765 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 21:27:38.708782 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 21:27:38.708798 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 21:27:38.708815 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 21:27:38.708834 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 21:27:38.708852 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 21:27:38.708869 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 21:27:38.708895 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-08-29 21:27:38.708913 | orchestrator | + source /etc/os-release 2025-08-29 21:27:38.708930 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-08-29 21:27:38.708946 | orchestrator | ++ NAME=Ubuntu 2025-08-29 21:27:38.708964 | orchestrator | ++ VERSION_ID=24.04 2025-08-29 21:27:38.708981 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-08-29 21:27:38.708994 | orchestrator | ++ VERSION_CODENAME=noble 2025-08-29 21:27:38.709004 | orchestrator | ++ ID=ubuntu 2025-08-29 21:27:38.709013 | orchestrator | ++ ID_LIKE=debian 2025-08-29 21:27:38.709022 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-08-29 21:27:38.709032 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-08-29 21:27:38.709041 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-08-29 21:27:38.709052 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-08-29 21:27:38.709062 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-08-29 21:27:38.709071 | orchestrator | ++ LOGO=ubuntu-logo 2025-08-29 21:27:38.709081 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-08-29 21:27:38.709091 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-08-29 21:27:38.709101 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-08-29 21:27:38.735696 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-08-29 21:28:00.896774 | orchestrator | 2025-08-29 21:28:00.896879 | orchestrator | # Status of Elasticsearch 2025-08-29 21:28:00.896895 | orchestrator | 2025-08-29 21:28:00.896908 | orchestrator | + pushd /opt/configuration/contrib 2025-08-29 21:28:00.896920 | orchestrator | + echo 2025-08-29 21:28:00.896932 | orchestrator | + echo '# Status of Elasticsearch' 2025-08-29 21:28:00.896943 | orchestrator | + echo 2025-08-29 21:28:00.896954 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-08-29 21:28:01.074532 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-08-29 21:28:01.074678 | orchestrator | 2025-08-29 21:28:01.074696 | orchestrator | # Status of MariaDB 2025-08-29 21:28:01.074707 | orchestrator | 2025-08-29 21:28:01.074718 | orchestrator | + echo 2025-08-29 21:28:01.074728 | orchestrator | + echo '# Status of MariaDB' 2025-08-29 21:28:01.074762 | orchestrator | + echo 2025-08-29 21:28:01.074773 | orchestrator | + MARIADB_USER=root_shard_0 2025-08-29 21:28:01.074784 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-08-29 21:28:01.125491 | orchestrator | Reading package lists... 2025-08-29 21:28:01.464279 | orchestrator | Building dependency tree... 2025-08-29 21:28:01.464377 | orchestrator | Reading state information... 2025-08-29 21:28:01.851356 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-08-29 21:28:01.851463 | orchestrator | bc set to manually installed. 2025-08-29 21:28:01.851478 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-08-29 21:28:02.509890 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-08-29 21:28:02.509985 | orchestrator | 2025-08-29 21:28:02.510001 | orchestrator | # Status of Prometheus 2025-08-29 21:28:02.510013 | orchestrator | 2025-08-29 21:28:02.510059 | orchestrator | + echo 2025-08-29 21:28:02.510070 | orchestrator | + echo '# Status of Prometheus' 2025-08-29 21:28:02.510080 | orchestrator | + echo 2025-08-29 21:28:02.510107 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-08-29 21:28:02.572492 | orchestrator | Unauthorized 2025-08-29 21:28:02.577014 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-08-29 21:28:02.651305 | orchestrator | Unauthorized 2025-08-29 21:28:02.654445 | orchestrator | 2025-08-29 21:28:02.654495 | orchestrator | # Status of RabbitMQ 2025-08-29 21:28:02.654509 | orchestrator | 2025-08-29 21:28:02.654520 | orchestrator | + echo 2025-08-29 21:28:02.654532 | orchestrator | + echo '# Status of RabbitMQ' 2025-08-29 21:28:02.654543 | orchestrator | + echo 2025-08-29 21:28:02.654555 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-08-29 21:28:03.082201 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-08-29 21:28:03.090953 | orchestrator | 2025-08-29 21:28:03.091002 | orchestrator | # Status of Redis 2025-08-29 21:28:03.091021 | orchestrator | 2025-08-29 21:28:03.091037 | orchestrator | + echo 2025-08-29 21:28:03.091047 | orchestrator | + echo '# Status of Redis' 2025-08-29 21:28:03.091056 | orchestrator | + echo 2025-08-29 21:28:03.091067 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-08-29 21:28:03.095437 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001788s;;;0.000000;10.000000 2025-08-29 21:28:03.095986 | orchestrator | 2025-08-29 21:28:03.096075 | orchestrator | # Create backup of MariaDB 
database 2025-08-29 21:28:03.096091 | orchestrator | 2025-08-29 21:28:03.096103 | orchestrator | + popd 2025-08-29 21:28:03.096115 | orchestrator | + echo 2025-08-29 21:28:03.096126 | orchestrator | + echo '# Create backup of MariaDB database' 2025-08-29 21:28:03.096137 | orchestrator | + echo 2025-08-29 21:28:03.096148 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-08-29 21:28:04.996002 | orchestrator | 2025-08-29 21:28:04 | INFO  | Task 9022859d-677e-4b66-a078-edb38e093ff5 (mariadb_backup) was prepared for execution. 2025-08-29 21:28:04.996118 | orchestrator | 2025-08-29 21:28:04 | INFO  | It takes a moment until task 9022859d-677e-4b66-a078-edb38e093ff5 (mariadb_backup) has been started and output is visible here. 2025-08-29 21:29:37.137053 | orchestrator | 2025-08-29 21:29:37.137167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 21:29:37.137184 | orchestrator | 2025-08-29 21:29:37.137196 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 21:29:37.137208 | orchestrator | Friday 29 August 2025 21:28:08 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-08-29 21:29:37.137219 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:29:37.137231 | orchestrator | ok: [testbed-node-1] 2025-08-29 21:29:37.137242 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:29:37.137253 | orchestrator | 2025-08-29 21:29:37.137264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 21:29:37.137275 | orchestrator | Friday 29 August 2025 21:28:09 +0000 (0:00:00.313) 0:00:00.486 ********* 2025-08-29 21:29:37.137286 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 21:29:37.137297 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 21:29:37.137332 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 21:29:37.137344 | orchestrator | 2025-08-29 21:29:37.137355 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 21:29:37.137366 | orchestrator | 2025-08-29 21:29:37.137377 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 21:29:37.137389 | orchestrator | Friday 29 August 2025 21:28:09 +0000 (0:00:00.524) 0:00:01.010 ********* 2025-08-29 21:29:37.137400 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 21:29:37.137411 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 21:29:37.137422 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 21:29:37.137432 | orchestrator | 2025-08-29 21:29:37.137447 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 21:29:37.137465 | orchestrator | Friday 29 August 2025 21:28:10 +0000 (0:00:00.372) 0:00:01.383 ********* 2025-08-29 21:29:37.137481 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 21:29:37.137494 | orchestrator | 2025-08-29 21:29:37.137504 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-08-29 21:29:37.137515 | orchestrator | Friday 29 August 2025 21:28:10 +0000 (0:00:00.557) 0:00:01.941 ********* 2025-08-29 21:29:37.137575 | orchestrator | ok: [testbed-node-0] 2025-08-29 21:29:37.137589 | orchestrator | ok: [testbed-node-1] 
2025-08-29 21:29:37.137601 | orchestrator | ok: [testbed-node-2] 2025-08-29 21:29:37.137613 | orchestrator | 2025-08-29 21:29:37.137626 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-08-29 21:29:37.137670 | orchestrator | Friday 29 August 2025 21:28:13 +0000 (0:00:02.809) 0:00:04.750 ********* 2025-08-29 21:29:37.137683 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 21:29:37.137696 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-08-29 21:29:37.137709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 21:29:37.137721 | orchestrator | mariadb_bootstrap_restart 2025-08-29 21:29:37.137733 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:29:37.137745 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:29:37.137757 | orchestrator | changed: [testbed-node-0] 2025-08-29 21:29:37.137770 | orchestrator | 2025-08-29 21:29:37.137782 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 21:29:37.137794 | orchestrator | skipping: no hosts matched 2025-08-29 21:29:37.137806 | orchestrator | 2025-08-29 21:29:37.137819 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 21:29:37.137831 | orchestrator | skipping: no hosts matched 2025-08-29 21:29:37.137843 | orchestrator | 2025-08-29 21:29:37.137855 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 21:29:37.137867 | orchestrator | skipping: no hosts matched 2025-08-29 21:29:37.137879 | orchestrator | 2025-08-29 21:29:37.137891 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-08-29 21:29:37.137904 | orchestrator | 2025-08-29 21:29:37.137916 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 21:29:37.137928 | orchestrator | Friday 29 August 2025 21:29:36 +0000 (0:01:22.835) 0:01:27.586 ********* 2025-08-29 21:29:37.137940 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:29:37.137968 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:29:37.137980 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:29:37.137990 | orchestrator | 2025-08-29 21:29:37.138001 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 21:29:37.138012 | orchestrator | Friday 29 August 2025 21:29:36 +0000 (0:00:00.310) 0:01:27.897 ********* 2025-08-29 21:29:37.138083 | orchestrator | skipping: [testbed-node-0] 2025-08-29 21:29:37.138094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 21:29:37.138105 | orchestrator | skipping: [testbed-node-2] 2025-08-29 21:29:37.138158 | orchestrator | 2025-08-29 21:29:37.138171 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:29:37.138183 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 21:29:37.138195 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 21:29:37.138206 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 21:29:37.138217 | orchestrator | 2025-08-29 21:29:37.138228 | orchestrator | 2025-08-29 21:29:37.138238 | orchestrator 
| TASKS RECAP ******************************************************************** 2025-08-29 21:29:37.138249 | orchestrator | Friday 29 August 2025 21:29:36 +0000 (0:00:00.224) 0:01:28.122 ********* 2025-08-29 21:29:37.138260 | orchestrator | =============================================================================== 2025-08-29 21:29:37.138270 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 82.84s 2025-08-29 21:29:37.138301 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.81s 2025-08-29 21:29:37.138312 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2025-08-29 21:29:37.138323 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-08-29 21:29:37.138333 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.37s 2025-08-29 21:29:37.138344 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-08-29 21:29:37.138355 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-08-29 21:29:37.138366 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.23s 2025-08-29 21:29:37.459545 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-08-29 21:29:37.467809 | orchestrator | + set -e 2025-08-29 21:29:37.467838 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 21:29:37.467851 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 21:29:37.467863 | orchestrator | ++ INTERACTIVE=false 2025-08-29 21:29:37.467879 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 21:29:37.467890 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 21:29:37.468132 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 21:29:37.469303 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 21:29:37.475740 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 21:29:37.475765 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 21:29:37.475776 | orchestrator | + export OS_CLOUD=admin 2025-08-29 21:29:37.475787 | orchestrator | + OS_CLOUD=admin 2025-08-29 21:29:37.475798 | orchestrator | 2025-08-29 21:29:37.475809 | orchestrator | # OpenStack endpoints 2025-08-29 21:29:37.475820 | orchestrator | + echo 2025-08-29 21:29:37.475831 | orchestrator | + echo '# OpenStack endpoints' 2025-08-29 21:29:37.475842 | orchestrator | + echo 2025-08-29 21:29:37.475853 | orchestrator | 2025-08-29 21:29:37.475865 | orchestrator | + openstack endpoint list 2025-08-29 21:29:40.643950 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 21:29:40.644048 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-08-29 21:29:40.644062 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 21:29:40.644073 | orchestrator | | 0f9773c6065449d4a01d9c54fa570b98 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-08-29 21:29:40.644084 | orchestrator | | 
1e82a13fafee4726becaa74fc564d037 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-08-29 21:29:40.644117 | orchestrator | | 21bcc11c887644e1af8b592208757e5e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-08-29 21:29:40.644129 | orchestrator | | 23047f61cdc4489db5873512af590248 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-08-29 21:29:40.644140 | orchestrator | | 23c546cf059a44dea650798d192cf9e9 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-08-29 21:29:40.644150 | orchestrator | | 41ee909a95d745a5a7770c79c102f665 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-08-29 21:29:40.644161 | orchestrator | | 4bbb37584ab54839b1d7bd6cc4fc8a5c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-08-29 21:29:40.644172 | orchestrator | | 50b73725729d4c1a956a1c9286c264e1 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-08-29 21:29:40.644183 | orchestrator | | 60002f58323c4d6eb1964656a3675037 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-08-29 21:29:40.644193 | orchestrator | | 62e63312aa004ed8a039068d9c7e1e46 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-08-29 21:29:40.644204 | orchestrator | | 6a53c158a3ea4de78c765e2168b4ccc2 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-08-29 21:29:40.644215 | orchestrator | | 8d779679a7404d148c2eb1bbaf3b0867 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-08-29 21:29:40.644225 | orchestrator | | a5457c37505a4ad09722ed6b629293fa | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-08-29 21:29:40.644236 | orchestrator | | adbcdd10b1f14b60a44951f7b14aafce | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-08-29 21:29:40.644247 | orchestrator | | b6689b78f067458c8899dfcc15b95ff2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-08-29 21:29:40.644257 | orchestrator | | bb83d65b8f904cdab80dcbfcd38a3236 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-08-29 21:29:40.644268 | orchestrator | | bdff6dbe59e34744a195f7c1fd985460 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-08-29 21:29:40.644279 | orchestrator | | cf242a5041f948b0b08e371c991afb74 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-08-29 21:29:40.644290 | orchestrator | | e77012c229b0449d9fd98908b6dcb567 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-08-29 21:29:40.644300 | orchestrator | | f09790432dee44fe9dfbe18257e0608f | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-08-29 21:29:40.644344 | orchestrator | | f8958a301d3642e4a512257c62cf12e5 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-08-29 21:29:40.644357 | orchestrator | | fab8c3a430374faebaff31d6ceacecec | RegionOne | swift | object-store 
| True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-08-29 21:29:40.644368 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-08-29 21:29:40.889993 | orchestrator | 2025-08-29 21:29:40.890143 | orchestrator | # Cinder 2025-08-29 21:29:40.890159 | orchestrator | 2025-08-29 21:29:40.890171 | orchestrator | + echo 2025-08-29 21:29:40.890183 | orchestrator | + echo '# Cinder' 2025-08-29 21:29:40.890195 | orchestrator | + echo 2025-08-29 21:29:40.890206 | orchestrator | + openstack volume service list 2025-08-29 21:29:43.523348 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:43.523445 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-08-29 21:29:43.523461 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:43.523473 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T21:29:42.000000 | 2025-08-29 21:29:43.523484 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T21:29:42.000000 | 2025-08-29 21:29:43.523495 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T21:29:42.000000 | 2025-08-29 21:29:43.523506 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-08-29T21:29:41.000000 | 2025-08-29 21:29:43.523517 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-08-29T21:29:33.000000 | 2025-08-29 21:29:43.523527 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-08-29T21:29:34.000000 | 2025-08-29 21:29:43.523538 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-08-29T21:29:43.000000 | 2025-08-29 21:29:43.523549 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-08-29T21:29:33.000000 | 2025-08-29 21:29:43.523580 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-08-29T21:29:33.000000 | 2025-08-29 21:29:43.523592 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:43.757920 | orchestrator | 2025-08-29 21:29:43.758011 | orchestrator | # Neutron 2025-08-29 21:29:43.758080 | orchestrator | 2025-08-29 21:29:43.758099 | orchestrator | + echo 2025-08-29 21:29:43.758120 | orchestrator | + echo '# Neutron' 2025-08-29 21:29:43.758142 | orchestrator | + echo 2025-08-29 21:29:43.758161 | orchestrator | + openstack network agent list 2025-08-29 21:29:46.639689 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 21:29:46.639790 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-08-29 21:29:46.639803 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 21:29:46.639815 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639825 | orchestrator | | 
testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639836 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639846 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639857 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639867 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-08-29 21:29:46.639903 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 21:29:46.639914 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 21:29:46.639924 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 21:29:46.639935 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 21:29:46.879304 | orchestrator | + openstack network service provider list 2025-08-29 21:29:49.982779 | orchestrator | +---------------+------+---------+ 2025-08-29 21:29:49.982879 | orchestrator | | Service Type | Name | Default | 2025-08-29 21:29:49.982892 | orchestrator | +---------------+------+---------+ 2025-08-29 21:29:49.982903 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-08-29 21:29:49.982914 | orchestrator | +---------------+------+---------+ 2025-08-29 21:29:50.241255 | orchestrator | 2025-08-29 21:29:50.241340 | orchestrator | # Nova 2025-08-29 21:29:50.241353 | orchestrator | 2025-08-29 21:29:50.241364 | orchestrator | + echo 2025-08-29 21:29:50.241374 | orchestrator | + echo '# Nova' 2025-08-29 21:29:50.241384 | orchestrator | + echo 2025-08-29 21:29:50.241395 | orchestrator | + openstack compute service list 2025-08-29 21:29:53.505843 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:53.505940 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-08-29 21:29:53.505950 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:53.505959 | orchestrator | | a13305e6-c7e4-414b-975c-cd27b0158c43 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T21:29:45.000000 | 2025-08-29 21:29:53.505967 | orchestrator | | f67e381d-c2bf-4081-a872-3570911ea0e8 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T21:29:51.000000 | 2025-08-29 21:29:53.505975 | orchestrator | | 6b9cb197-5e99-4b4a-b36e-e886a474aebc | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T21:29:44.000000 | 2025-08-29 21:29:53.505983 | orchestrator | | c1e0e4f4-85f8-470d-aaec-3d155d23df25 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-08-29T21:29:52.000000 | 2025-08-29 21:29:53.505991 | orchestrator | | 5712d8d7-4b2d-4131-b9f7-46517c1a2f09 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-08-29T21:29:45.000000 | 2025-08-29 
21:29:53.505998 | orchestrator | | 0e60b09b-e389-4c3e-84aa-bccd22d26e6f | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-08-29T21:29:46.000000 | 2025-08-29 21:29:53.506006 | orchestrator | | a4171b3a-ab2e-4fa9-b752-5d813c0ae3a7 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-08-29T21:29:43.000000 | 2025-08-29 21:29:53.506014 | orchestrator | | 8ea218ff-e609-49c6-8ff3-efaba9f02753 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-08-29T21:29:43.000000 | 2025-08-29 21:29:53.506082 | orchestrator | | 30ecd88d-aeb4-4218-a271-910a8fc12d81 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-08-29T21:29:43.000000 | 2025-08-29 21:29:53.506091 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 21:29:53.835151 | orchestrator | + openstack hypervisor list 2025-08-29 21:29:58.957107 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 21:29:58.957218 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-08-29 21:29:58.957232 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 21:29:58.957244 | orchestrator | | 44f98900-dc0c-4322-95cf-c3b73d729fdc | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-08-29 21:29:58.957283 | orchestrator | | 4555312d-27df-4a0d-942e-5cc00bd1cddd | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-08-29 21:29:58.957295 | orchestrator | | 354674e4-c3c5-42d4-b905-d38fc74bb241 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-08-29 21:29:58.957305 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 21:29:59.211393 | orchestrator | 2025-08-29 21:29:59.211489 | orchestrator | # Run OpenStack test play 2025-08-29 21:29:59.211505 | orchestrator | 2025-08-29 21:29:59.211517 | orchestrator | + echo 2025-08-29 21:29:59.211529 | orchestrator | + echo '# Run OpenStack test play' 2025-08-29 21:29:59.211541 | orchestrator | + echo 2025-08-29 21:29:59.211552 | orchestrator | + osism apply --environment openstack test 2025-08-29 21:30:01.150456 | orchestrator | 2025-08-29 21:30:01 | INFO  | Trying to run play test in environment openstack 2025-08-29 21:30:01.216767 | orchestrator | 2025-08-29 21:30:01 | INFO  | Task 77c5a844-167f-480d-9f5c-fbd83a2b9824 (test) was prepared for execution. 2025-08-29 21:30:01.216859 | orchestrator | 2025-08-29 21:30:01 | INFO  | It takes a moment until task 77c5a844-167f-480d-9f5c-fbd83a2b9824 (test) has been started and output is visible here. 
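
[Editor's note] The health checks traced above by 200-infrastructure.sh and 300-openstack.sh can also be re-run by hand on the manager node while waiting for the test play to start. The following is only a minimal sketch, not the scripts themselves: the hostnames, credentials, plugin paths and the clouds.yaml entry (OS_CLOUD=admin) are exactly those visible in this run and will differ elsewhere, and the final "down" assertion at the end is an illustrative addition that the original scripts do not perform.

    #!/usr/bin/env bash
    # Sketch: manually re-run the infrastructure and OpenStack checks seen in
    # the job output above. Assumes the testbed manager node with
    # /opt/configuration checked out and an "admin" entry in clouds.yaml.
    set -euo pipefail

    cd /opt/configuration/contrib
    bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
    bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password \
        -H api-int.testbed.osism.xyz -c 1
    perl nagios-plugins/check_rabbitmq_cluster --ssl 1 \
        -H api-int.testbed.osism.xyz -u openstack -p password

    export OS_CLOUD=admin
    openstack endpoint list                  # all services registered (public/internal)
    openstack volume service list            # cinder scheduler/volume/backup "up"
    openstack network agent list             # OVN controller/metadata agents alive
    openstack network service provider list  # L3_ROUTER_NAT served by ovn
    openstack compute service list           # nova scheduler/conductor/compute "up"
    openstack hypervisor list                # the three compute nodes in state "up"

    # Illustrative extra check (not part of the original scripts): fail if
    # any nova service reports "down".
    if openstack compute service list -f value -c State | grep -qw down; then
        echo "at least one nova service is down" >&2
        exit 1
    fi
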
2025-08-29 21:35:54.652349 | orchestrator | 2025-08-29 21:35:54.652492 | orchestrator | PLAY [Create test project] ***************************************************** 2025-08-29 21:35:54.652512 | orchestrator | 2025-08-29 21:35:54.652525 | orchestrator | TASK [Create test domain] ****************************************************** 2025-08-29 21:35:54.652538 | orchestrator | Friday 29 August 2025 21:30:05 +0000 (0:00:00.066) 0:00:00.066 ********* 2025-08-29 21:35:54.652549 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652574 | orchestrator | 2025-08-29 21:35:54.652585 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-08-29 21:35:54.652597 | orchestrator | Friday 29 August 2025 21:30:08 +0000 (0:00:03.292) 0:00:03.359 ********* 2025-08-29 21:35:54.652608 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652618 | orchestrator | 2025-08-29 21:35:54.652630 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-08-29 21:35:54.652641 | orchestrator | Friday 29 August 2025 21:30:12 +0000 (0:00:03.859) 0:00:07.218 ********* 2025-08-29 21:35:54.652651 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652662 | orchestrator | 2025-08-29 21:35:54.652673 | orchestrator | TASK [Create test project] ***************************************************** 2025-08-29 21:35:54.652684 | orchestrator | Friday 29 August 2025 21:30:17 +0000 (0:00:05.581) 0:00:12.800 ********* 2025-08-29 21:35:54.652695 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652706 | orchestrator | 2025-08-29 21:35:54.652717 | orchestrator | TASK [Create test user] ******************************************************** 2025-08-29 21:35:54.652728 | orchestrator | Friday 29 August 2025 21:30:21 +0000 (0:00:03.656) 0:00:16.457 ********* 2025-08-29 21:35:54.652738 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652749 | orchestrator | 2025-08-29 21:35:54.652760 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-08-29 21:35:54.652771 | orchestrator | Friday 29 August 2025 21:30:25 +0000 (0:00:04.040) 0:00:20.497 ********* 2025-08-29 21:35:54.652782 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-08-29 21:35:54.652795 | orchestrator | changed: [localhost] => (item=member) 2025-08-29 21:35:54.652821 | orchestrator | changed: [localhost] => (item=creator) 2025-08-29 21:35:54.652833 | orchestrator | 2025-08-29 21:35:54.652846 | orchestrator | TASK [Create test server group] ************************************************ 2025-08-29 21:35:54.652858 | orchestrator | Friday 29 August 2025 21:30:37 +0000 (0:00:11.730) 0:00:32.228 ********* 2025-08-29 21:35:54.652870 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652882 | orchestrator | 2025-08-29 21:35:54.652893 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-08-29 21:35:54.652905 | orchestrator | Friday 29 August 2025 21:30:41 +0000 (0:00:04.570) 0:00:36.798 ********* 2025-08-29 21:35:54.652917 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.652954 | orchestrator | 2025-08-29 21:35:54.653009 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-08-29 21:35:54.653023 | orchestrator | Friday 29 August 2025 21:30:46 +0000 (0:00:04.765) 0:00:41.563 ********* 2025-08-29 21:35:54.653035 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653047 | 
orchestrator | 2025-08-29 21:35:54.653059 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-08-29 21:35:54.653077 | orchestrator | Friday 29 August 2025 21:30:50 +0000 (0:00:04.062) 0:00:45.626 ********* 2025-08-29 21:35:54.653095 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653113 | orchestrator | 2025-08-29 21:35:54.653131 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-08-29 21:35:54.653149 | orchestrator | Friday 29 August 2025 21:30:54 +0000 (0:00:03.766) 0:00:49.392 ********* 2025-08-29 21:35:54.653167 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653184 | orchestrator | 2025-08-29 21:35:54.653203 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-08-29 21:35:54.653220 | orchestrator | Friday 29 August 2025 21:30:58 +0000 (0:00:03.641) 0:00:53.034 ********* 2025-08-29 21:35:54.653240 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653259 | orchestrator | 2025-08-29 21:35:54.653278 | orchestrator | TASK [Create test network topology] ******************************************** 2025-08-29 21:35:54.653327 | orchestrator | Friday 29 August 2025 21:31:02 +0000 (0:00:03.991) 0:00:57.026 ********* 2025-08-29 21:35:54.653339 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653350 | orchestrator | 2025-08-29 21:35:54.653361 | orchestrator | TASK [Create test instances] *************************************************** 2025-08-29 21:35:54.653372 | orchestrator | Friday 29 August 2025 21:31:16 +0000 (0:00:14.725) 0:01:11.751 ********* 2025-08-29 21:35:54.653383 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 21:35:54.653394 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 21:35:54.653404 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 21:35:54.653415 | orchestrator | 2025-08-29 21:35:54.653426 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-08-29 21:35:54.653436 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 21:35:54.653447 | orchestrator | 2025-08-29 21:35:54.653458 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-08-29 21:35:54.653468 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 21:35:54.653479 | orchestrator | 2025-08-29 21:35:54.653489 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-08-29 21:35:54.653500 | orchestrator | Friday 29 August 2025 21:34:33 +0000 (0:03:16.768) 0:04:28.519 ********* 2025-08-29 21:35:54.653511 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 21:35:54.653522 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 21:35:54.653532 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 21:35:54.653543 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 21:35:54.653553 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 21:35:54.653564 | orchestrator | 2025-08-29 21:35:54.653575 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-08-29 21:35:54.653585 | orchestrator | Friday 29 August 2025 21:34:57 +0000 (0:00:23.539) 0:04:52.059 ********* 2025-08-29 21:35:54.653596 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 21:35:54.653606 | orchestrator | changed: [localhost] => (item=test-1) 
2025-08-29 21:35:54.653617 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 21:35:54.653628 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 21:35:54.653659 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 21:35:54.653671 | orchestrator | 2025-08-29 21:35:54.653682 | orchestrator | TASK [Create test volume] ****************************************************** 2025-08-29 21:35:54.653692 | orchestrator | Friday 29 August 2025 21:35:28 +0000 (0:00:31.797) 0:05:23.856 ********* 2025-08-29 21:35:54.653703 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653714 | orchestrator | 2025-08-29 21:35:54.653749 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-08-29 21:35:54.653761 | orchestrator | Friday 29 August 2025 21:35:35 +0000 (0:00:06.941) 0:05:30.798 ********* 2025-08-29 21:35:54.653772 | orchestrator | changed: [localhost] 2025-08-29 21:35:54.653782 | orchestrator | 2025-08-29 21:35:54.653793 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-08-29 21:35:54.653804 | orchestrator | Friday 29 August 2025 21:35:49 +0000 (0:00:13.411) 0:05:44.210 ********* 2025-08-29 21:35:54.653815 | orchestrator | ok: [localhost] 2025-08-29 21:35:54.653826 | orchestrator | 2025-08-29 21:35:54.653837 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-08-29 21:35:54.653847 | orchestrator | Friday 29 August 2025 21:35:54 +0000 (0:00:05.114) 0:05:49.324 ********* 2025-08-29 21:35:54.653858 | orchestrator | ok: [localhost] => { 2025-08-29 21:35:54.653869 | orchestrator |  "msg": "192.168.112.190" 2025-08-29 21:35:54.653880 | orchestrator | } 2025-08-29 21:35:54.653891 | orchestrator | 2025-08-29 21:35:54.653902 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 21:35:54.653913 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 21:35:54.653925 | orchestrator | 2025-08-29 21:35:54.653936 | orchestrator | 2025-08-29 21:35:54.653947 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 21:35:54.653978 | orchestrator | Friday 29 August 2025 21:35:54 +0000 (0:00:00.047) 0:05:49.372 ********* 2025-08-29 21:35:54.653990 | orchestrator | =============================================================================== 2025-08-29 21:35:54.654001 | orchestrator | Create test instances ------------------------------------------------- 196.77s 2025-08-29 21:35:54.654011 | orchestrator | Add tag to instances --------------------------------------------------- 31.80s 2025-08-29 21:35:54.654078 | orchestrator | Add metadata to instances ---------------------------------------------- 23.54s 2025-08-29 21:35:54.654089 | orchestrator | Create test network topology ------------------------------------------- 14.73s 2025-08-29 21:35:54.654099 | orchestrator | Attach test volume ----------------------------------------------------- 13.41s 2025-08-29 21:35:54.654110 | orchestrator | Add member roles to user test ------------------------------------------ 11.73s 2025-08-29 21:35:54.654121 | orchestrator | Create test volume ------------------------------------------------------ 6.94s 2025-08-29 21:35:54.654132 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.58s 2025-08-29 21:35:54.654142 | orchestrator | Create 
floating ip address ---------------------------------------------- 5.11s 2025-08-29 21:35:54.654153 | orchestrator | Create ssh security group ----------------------------------------------- 4.77s 2025-08-29 21:35:54.654164 | orchestrator | Create test server group ------------------------------------------------ 4.57s 2025-08-29 21:35:54.654174 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.06s 2025-08-29 21:35:54.654185 | orchestrator | Create test user -------------------------------------------------------- 4.04s 2025-08-29 21:35:54.654195 | orchestrator | Create test keypair ----------------------------------------------------- 3.99s 2025-08-29 21:35:54.654206 | orchestrator | Create test-admin user -------------------------------------------------- 3.86s 2025-08-29 21:35:54.654217 | orchestrator | Create icmp security group ---------------------------------------------- 3.77s 2025-08-29 21:35:54.654234 | orchestrator | Create test project ----------------------------------------------------- 3.66s 2025-08-29 21:35:54.654245 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.64s 2025-08-29 21:35:54.654255 | orchestrator | Create test domain ------------------------------------------------------ 3.29s 2025-08-29 21:35:54.654266 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-08-29 21:35:54.924317 | orchestrator | + server_list 2025-08-29 21:35:54.924406 | orchestrator | + openstack --os-cloud test server list 2025-08-29 21:35:58.907265 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 21:35:58.907398 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-08-29 21:35:58.907412 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 21:35:58.907424 | orchestrator | | 659d447e-c6e5-427f-8392-5ed3253a62ed | test-4 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.108 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 21:35:58.907435 | orchestrator | | 8f78193e-d596-4c2d-8583-59ad9bfdd351 | test-3 | ACTIVE | auto_allocated_network=10.42.0.40, 192.168.112.191 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 21:35:58.907445 | orchestrator | | e16287b5-1469-4770-99ff-20bf35775370 | test-2 | ACTIVE | auto_allocated_network=10.42.0.31, 192.168.112.159 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 21:35:58.907456 | orchestrator | | 4f5fd9d6-c9b5-4500-810f-2fa02a7e5726 | test-1 | ACTIVE | auto_allocated_network=10.42.0.18, 192.168.112.147 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 21:35:58.907467 | orchestrator | | b4109098-18d9-48d1-9ec5-62fd8945123e | test | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.190 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 21:35:58.907478 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 21:35:59.157544 | orchestrator | + openstack --os-cloud test server show test 2025-08-29 21:36:02.546594 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:02.546818 | orchestrator | | Field | Value | 2025-08-29 21:36:02.546841 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:02.546852 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 21:36:02.546864 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 21:36:02.546875 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 21:36:02.546886 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-08-29 21:36:02.546924 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 21:36:02.546936 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 21:36:02.546947 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 21:36:02.546958 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 21:36:02.547016 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 21:36:02.547029 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 21:36:02.547040 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 21:36:02.547051 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 21:36:02.547062 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 21:36:02.547073 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 21:36:02.547084 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 21:36:02.547108 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T21:31:45.000000 | 2025-08-29 21:36:02.547122 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 21:36:02.547134 | orchestrator | | accessIPv4 | | 2025-08-29 21:36:02.547147 | orchestrator | | accessIPv6 | | 2025-08-29 21:36:02.547159 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.190 | 2025-08-29 21:36:02.547179 | orchestrator | | config_drive | | 2025-08-29 21:36:02.547192 | orchestrator | | created | 2025-08-29T21:31:25Z | 2025-08-29 21:36:02.547205 | orchestrator | | description | None | 2025-08-29 21:36:02.547218 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 21:36:02.547231 | orchestrator | | hostId | e0cc5db39967c56248be58a69765f300c62265d33996887130c93d07 | 2025-08-29 21:36:02.547243 | orchestrator | | host_status | None | 2025-08-29 21:36:02.547262 | orchestrator | | id | b4109098-18d9-48d1-9ec5-62fd8945123e | 2025-08-29 21:36:02.547281 | orchestrator | | image | Cirros 0.6.2 (bd3f040c-4b06-4e63-9b86-c430661dc20f) | 2025-08-29 21:36:02.547295 | orchestrator | | key_name | test | 2025-08-29 21:36:02.547307 | orchestrator | | locked | False | 2025-08-29 
21:36:02.547320 | orchestrator | | locked_reason | None | 2025-08-29 21:36:02.547333 | orchestrator | | name | test | 2025-08-29 21:36:02.547352 | orchestrator | | pinned_availability_zone | None | 2025-08-29 21:36:02.547365 | orchestrator | | progress | 0 | 2025-08-29 21:36:02.547377 | orchestrator | | project_id | 8d9030a1fe81454ca52595203a5f9cac | 2025-08-29 21:36:02.547390 | orchestrator | | properties | hostname='test' | 2025-08-29 21:36:02.547403 | orchestrator | | security_groups | name='icmp' | 2025-08-29 21:36:02.547421 | orchestrator | | | name='ssh' | 2025-08-29 21:36:02.547440 | orchestrator | | server_groups | None | 2025-08-29 21:36:02.547453 | orchestrator | | status | ACTIVE | 2025-08-29 21:36:02.547465 | orchestrator | | tags | test | 2025-08-29 21:36:02.547475 | orchestrator | | trusted_image_certificates | None | 2025-08-29 21:36:02.547486 | orchestrator | | updated | 2025-08-29T21:34:38Z | 2025-08-29 21:36:02.547503 | orchestrator | | user_id | 0905e562f2d8486595ce65e77fc741dc | 2025-08-29 21:36:02.547514 | orchestrator | | volumes_attached | delete_on_termination='False', id='35fbe6de-f206-4b3b-b07a-0655e4f3f833' | 2025-08-29 21:36:02.551013 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:02.840147 | orchestrator | + openstack --os-cloud test server show test-1 2025-08-29 21:36:06.255673 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:06.255805 | orchestrator | | Field | Value | 2025-08-29 21:36:06.255822 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:06.255834 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 21:36:06.255861 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 21:36:06.255873 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 21:36:06.255884 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-08-29 21:36:06.255895 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 21:36:06.255906 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 21:36:06.255917 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 21:36:06.255928 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 21:36:06.255956 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 21:36:06.256032 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 21:36:06.256045 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 21:36:06.256056 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 21:36:06.256068 | orchestrator | | OS-EXT-STS:power_state 
| Running | 2025-08-29 21:36:06.256079 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 21:36:06.256099 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 21:36:06.256111 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T21:32:28.000000 | 2025-08-29 21:36:06.256122 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 21:36:06.256133 | orchestrator | | accessIPv4 | | 2025-08-29 21:36:06.256144 | orchestrator | | accessIPv6 | | 2025-08-29 21:36:06.256156 | orchestrator | | addresses | auto_allocated_network=10.42.0.18, 192.168.112.147 | 2025-08-29 21:36:06.256186 | orchestrator | | config_drive | | 2025-08-29 21:36:06.256198 | orchestrator | | created | 2025-08-29T21:32:07Z | 2025-08-29 21:36:06.256209 | orchestrator | | description | None | 2025-08-29 21:36:06.256220 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 21:36:06.256235 | orchestrator | | hostId | f96618873d199aaf715ae6ac9cf4d1e29c2bcbc382c94365eb0322de | 2025-08-29 21:36:06.256247 | orchestrator | | host_status | None | 2025-08-29 21:36:06.256258 | orchestrator | | id | 4f5fd9d6-c9b5-4500-810f-2fa02a7e5726 | 2025-08-29 21:36:06.256269 | orchestrator | | image | Cirros 0.6.2 (bd3f040c-4b06-4e63-9b86-c430661dc20f) | 2025-08-29 21:36:06.256280 | orchestrator | | key_name | test | 2025-08-29 21:36:06.256292 | orchestrator | | locked | False | 2025-08-29 21:36:06.256310 | orchestrator | | locked_reason | None | 2025-08-29 21:36:06.256321 | orchestrator | | name | test-1 | 2025-08-29 21:36:06.256338 | orchestrator | | pinned_availability_zone | None | 2025-08-29 21:36:06.256350 | orchestrator | | progress | 0 | 2025-08-29 21:36:06.256361 | orchestrator | | project_id | 8d9030a1fe81454ca52595203a5f9cac | 2025-08-29 21:36:06.256372 | orchestrator | | properties | hostname='test-1' | 2025-08-29 21:36:06.256388 | orchestrator | | security_groups | name='icmp' | 2025-08-29 21:36:06.256400 | orchestrator | | | name='ssh' | 2025-08-29 21:36:06.256411 | orchestrator | | server_groups | None | 2025-08-29 21:36:06.256421 | orchestrator | | status | ACTIVE | 2025-08-29 21:36:06.256433 | orchestrator | | tags | test | 2025-08-29 21:36:06.256452 | orchestrator | | trusted_image_certificates | None | 2025-08-29 21:36:06.256463 | orchestrator | | updated | 2025-08-29T21:34:43Z | 2025-08-29 21:36:06.256479 | orchestrator | | user_id | 0905e562f2d8486595ce65e77fc741dc | 2025-08-29 21:36:06.256491 | orchestrator | | volumes_attached | | 2025-08-29 21:36:06.267258 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:06.552921 | orchestrator | + openstack --os-cloud test server show test-2 2025-08-29 21:36:09.651859 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:09.651959 | orchestrator | | Field | Value | 2025-08-29 21:36:09.652027 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:09.652040 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 21:36:09.652051 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 21:36:09.652061 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 21:36:09.652092 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-08-29 21:36:09.652128 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 21:36:09.652138 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 21:36:09.652148 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 21:36:09.652158 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 21:36:09.652186 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 21:36:09.652197 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 21:36:09.652213 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 21:36:09.652223 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 21:36:09.652233 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 21:36:09.652243 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 21:36:09.652261 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 21:36:09.652272 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T21:33:08.000000 | 2025-08-29 21:36:09.652282 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 21:36:09.652291 | orchestrator | | accessIPv4 | | 2025-08-29 21:36:09.652301 | orchestrator | | accessIPv6 | | 2025-08-29 21:36:09.652311 | orchestrator | | addresses | auto_allocated_network=10.42.0.31, 192.168.112.159 | 2025-08-29 21:36:09.652327 | orchestrator | | config_drive | | 2025-08-29 21:36:09.652337 | orchestrator | | created | 2025-08-29T21:32:45Z | 2025-08-29 21:36:09.652348 | orchestrator | | description | None | 2025-08-29 21:36:09.652358 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 21:36:09.652374 | orchestrator | | hostId | 03a1835757194e3302593a0617a58ee7acda3b4c4f1018fd3af313a9 | 2025-08-29 21:36:09.652384 | orchestrator | | host_status | None | 2025-08-29 21:36:09.652394 | orchestrator | | id | e16287b5-1469-4770-99ff-20bf35775370 | 2025-08-29 21:36:09.652404 | orchestrator | | image | Cirros 0.6.2 (bd3f040c-4b06-4e63-9b86-c430661dc20f) | 2025-08-29 21:36:09.652414 | orchestrator | | key_name | test | 2025-08-29 21:36:09.652424 | orchestrator | | locked | False | 2025-08-29 
21:36:09.652434 | orchestrator | | locked_reason | None | 2025-08-29 21:36:09.652444 | orchestrator | | name | test-2 | 2025-08-29 21:36:09.652474 | orchestrator | | pinned_availability_zone | None | 2025-08-29 21:36:09.652485 | orchestrator | | progress | 0 | 2025-08-29 21:36:09.652499 | orchestrator | | project_id | 8d9030a1fe81454ca52595203a5f9cac | 2025-08-29 21:36:09.652515 | orchestrator | | properties | hostname='test-2' | 2025-08-29 21:36:09.652525 | orchestrator | | security_groups | name='icmp' | 2025-08-29 21:36:09.652535 | orchestrator | | | name='ssh' | 2025-08-29 21:36:09.652544 | orchestrator | | server_groups | None | 2025-08-29 21:36:09.652554 | orchestrator | | status | ACTIVE | 2025-08-29 21:36:09.652564 | orchestrator | | tags | test | 2025-08-29 21:36:09.652574 | orchestrator | | trusted_image_certificates | None | 2025-08-29 21:36:09.652584 | orchestrator | | updated | 2025-08-29T21:34:47Z | 2025-08-29 21:36:09.652598 | orchestrator | | user_id | 0905e562f2d8486595ce65e77fc741dc | 2025-08-29 21:36:09.652608 | orchestrator | | volumes_attached | | 2025-08-29 21:36:09.657352 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:09.919322 | orchestrator | + openstack --os-cloud test server show test-3 2025-08-29 21:36:13.077445 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:13.078286 | orchestrator | | Field | Value | 2025-08-29 21:36:13.078376 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:13.078398 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 21:36:13.078420 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 21:36:13.078434 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 21:36:13.078446 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-08-29 21:36:13.078457 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 21:36:13.078468 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 21:36:13.078479 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 21:36:13.078490 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 21:36:13.078572 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 21:36:13.078587 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 21:36:13.078598 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 21:36:13.078609 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 21:36:13.078620 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 21:36:13.078631 | orchestrator | | 
OS-EXT-STS:task_state | None | 2025-08-29 21:36:13.078642 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 21:36:13.078653 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T21:33:44.000000 | 2025-08-29 21:36:13.078664 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 21:36:13.078675 | orchestrator | | accessIPv4 | | 2025-08-29 21:36:13.078737 | orchestrator | | accessIPv6 | | 2025-08-29 21:36:13.078755 | orchestrator | | addresses | auto_allocated_network=10.42.0.40, 192.168.112.191 | 2025-08-29 21:36:13.078774 | orchestrator | | config_drive | | 2025-08-29 21:36:13.078786 | orchestrator | | created | 2025-08-29T21:33:28Z | 2025-08-29 21:36:13.078797 | orchestrator | | description | None | 2025-08-29 21:36:13.078808 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 21:36:13.078820 | orchestrator | | hostId | f96618873d199aaf715ae6ac9cf4d1e29c2bcbc382c94365eb0322de | 2025-08-29 21:36:13.078831 | orchestrator | | host_status | None | 2025-08-29 21:36:13.078842 | orchestrator | | id | 8f78193e-d596-4c2d-8583-59ad9bfdd351 | 2025-08-29 21:36:13.078853 | orchestrator | | image | Cirros 0.6.2 (bd3f040c-4b06-4e63-9b86-c430661dc20f) | 2025-08-29 21:36:13.078864 | orchestrator | | key_name | test | 2025-08-29 21:36:13.078882 | orchestrator | | locked | False | 2025-08-29 21:36:13.078893 | orchestrator | | locked_reason | None | 2025-08-29 21:36:13.078909 | orchestrator | | name | test-3 | 2025-08-29 21:36:13.078926 | orchestrator | | pinned_availability_zone | None | 2025-08-29 21:36:13.078938 | orchestrator | | progress | 0 | 2025-08-29 21:36:13.078949 | orchestrator | | project_id | 8d9030a1fe81454ca52595203a5f9cac | 2025-08-29 21:36:13.078960 | orchestrator | | properties | hostname='test-3' | 2025-08-29 21:36:13.078971 | orchestrator | | security_groups | name='icmp' | 2025-08-29 21:36:13.079004 | orchestrator | | | name='ssh' | 2025-08-29 21:36:13.079015 | orchestrator | | server_groups | None | 2025-08-29 21:36:13.079026 | orchestrator | | status | ACTIVE | 2025-08-29 21:36:13.079046 | orchestrator | | tags | test | 2025-08-29 21:36:13.079057 | orchestrator | | trusted_image_certificates | None | 2025-08-29 21:36:13.079068 | orchestrator | | updated | 2025-08-29T21:34:52Z | 2025-08-29 21:36:13.079104 | orchestrator | | user_id | 0905e562f2d8486595ce65e77fc741dc | 2025-08-29 21:36:13.079117 | orchestrator | | volumes_attached | | 2025-08-29 21:36:13.082364 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:13.342263 | orchestrator | + openstack --os-cloud test server show test-4 2025-08-29 21:36:16.534095 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:16.534214 | orchestrator | | Field | Value | 2025-08-29 21:36:16.534230 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:16.534242 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 21:36:16.534276 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 21:36:16.534288 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 21:36:16.534300 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-08-29 21:36:16.534311 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 21:36:16.534323 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 21:36:16.534334 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 21:36:16.534346 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 21:36:16.534375 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 21:36:16.534403 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 21:36:16.534415 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 21:36:16.534426 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 21:36:16.534445 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 21:36:16.534457 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 21:36:16.534468 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 21:36:16.534479 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T21:34:17.000000 | 2025-08-29 21:36:16.534490 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 21:36:16.534507 | orchestrator | | accessIPv4 | | 2025-08-29 21:36:16.534518 | orchestrator | | accessIPv6 | | 2025-08-29 21:36:16.534530 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.108 | 2025-08-29 21:36:16.534548 | orchestrator | | config_drive | | 2025-08-29 21:36:16.534559 | orchestrator | | created | 2025-08-29T21:34:00Z | 2025-08-29 21:36:16.534570 | orchestrator | | description | None | 2025-08-29 21:36:16.534588 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 21:36:16.534599 | orchestrator | | hostId | e0cc5db39967c56248be58a69765f300c62265d33996887130c93d07 | 2025-08-29 21:36:16.534610 | orchestrator | | host_status | None | 2025-08-29 21:36:16.534621 | orchestrator | | id | 659d447e-c6e5-427f-8392-5ed3253a62ed | 2025-08-29 21:36:16.534632 | orchestrator | | image | Cirros 0.6.2 (bd3f040c-4b06-4e63-9b86-c430661dc20f) | 2025-08-29 21:36:16.534643 | orchestrator | | key_name | test | 2025-08-29 21:36:16.534659 | orchestrator | | locked | False | 2025-08-29 
21:36:16.534670 | orchestrator | | locked_reason | None | 2025-08-29 21:36:16.534681 | orchestrator | | name | test-4 | 2025-08-29 21:36:16.534698 | orchestrator | | pinned_availability_zone | None | 2025-08-29 21:36:16.534710 | orchestrator | | progress | 0 | 2025-08-29 21:36:16.534729 | orchestrator | | project_id | 8d9030a1fe81454ca52595203a5f9cac | 2025-08-29 21:36:16.534748 | orchestrator | | properties | hostname='test-4' | 2025-08-29 21:36:16.534768 | orchestrator | | security_groups | name='icmp' | 2025-08-29 21:36:16.534789 | orchestrator | | | name='ssh' | 2025-08-29 21:36:16.534810 | orchestrator | | server_groups | None | 2025-08-29 21:36:16.534832 | orchestrator | | status | ACTIVE | 2025-08-29 21:36:16.534852 | orchestrator | | tags | test | 2025-08-29 21:36:16.534864 | orchestrator | | trusted_image_certificates | None | 2025-08-29 21:36:16.534875 | orchestrator | | updated | 2025-08-29T21:34:56Z | 2025-08-29 21:36:16.534892 | orchestrator | | user_id | 0905e562f2d8486595ce65e77fc741dc | 2025-08-29 21:36:16.534904 | orchestrator | | volumes_attached | | 2025-08-29 21:36:16.540271 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 21:36:16.787135 | orchestrator | + server_ping 2025-08-29 21:36:16.788664 | orchestrator | ++ tr -d '\r' 2025-08-29 21:36:16.788704 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-08-29 21:36:19.595405 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 21:36:19.595501 | orchestrator | + ping -c3 192.168.112.190 2025-08-29 21:36:19.614921 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 2025-08-29 21:36:19.614978 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=10.6 ms 2025-08-29 21:36:20.608456 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.50 ms 2025-08-29 21:36:21.609658 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.36 ms 2025-08-29 21:36:21.609754 | orchestrator | 2025-08-29 21:36:21.609770 | orchestrator | --- 192.168.112.190 ping statistics --- 2025-08-29 21:36:21.609783 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 21:36:21.609795 | orchestrator | rtt min/avg/max/mdev = 2.356/5.156/10.617/3.861 ms 2025-08-29 21:36:21.610353 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 21:36:21.610380 | orchestrator | + ping -c3 192.168.112.108 2025-08-29 21:36:21.621551 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2025-08-29 21:36:21.621580 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.68 ms 2025-08-29 21:36:22.620280 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.99 ms 2025-08-29 21:36:23.621270 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.02 ms 2025-08-29 21:36:23.621367 | orchestrator | 2025-08-29 21:36:23.621383 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-08-29 21:36:23.621396 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 21:36:23.621408 | orchestrator | rtt min/avg/max/mdev = 2.017/3.895/6.683/2.010 ms 2025-08-29 21:36:23.621420 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 21:36:23.621432 | orchestrator | + ping -c3 192.168.112.159 2025-08-29 21:36:23.635456 | orchestrator | PING 192.168.112.159 (192.168.112.159) 56(84) bytes of data. 2025-08-29 21:36:23.635504 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=1 ttl=63 time=9.61 ms 2025-08-29 21:36:24.630644 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=2 ttl=63 time=2.71 ms 2025-08-29 21:36:25.631896 | orchestrator | 64 bytes from 192.168.112.159: icmp_seq=3 ttl=63 time=1.95 ms 2025-08-29 21:36:25.632040 | orchestrator | 2025-08-29 21:36:25.632059 | orchestrator | --- 192.168.112.159 ping statistics --- 2025-08-29 21:36:25.632073 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-08-29 21:36:25.632085 | orchestrator | rtt min/avg/max/mdev = 1.953/4.759/9.613/3.446 ms 2025-08-29 21:36:25.634218 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 21:36:25.634245 | orchestrator | + ping -c3 192.168.112.147 2025-08-29 21:36:25.648185 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data. 2025-08-29 21:36:25.648242 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=10.6 ms 2025-08-29 21:36:26.642279 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.46 ms 2025-08-29 21:36:27.643625 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.32 ms 2025-08-29 21:36:27.643727 | orchestrator | 2025-08-29 21:36:27.643742 | orchestrator | --- 192.168.112.147 ping statistics --- 2025-08-29 21:36:27.643790 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 21:36:27.643802 | orchestrator | rtt min/avg/max/mdev = 2.324/5.125/10.591/3.865 ms 2025-08-29 21:36:27.644853 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-08-29 21:36:27.644875 | orchestrator | + ping -c3 192.168.112.191 2025-08-29 21:36:27.658925 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data. 
2025-08-29 21:36:27.658948 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=10.2 ms 2025-08-29 21:36:28.653238 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.51 ms 2025-08-29 21:36:29.654382 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.78 ms 2025-08-29 21:36:29.655352 | orchestrator | 2025-08-29 21:36:29.655388 | orchestrator | --- 192.168.112.191 ping statistics --- 2025-08-29 21:36:29.655402 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-08-29 21:36:29.655412 | orchestrator | rtt min/avg/max/mdev = 1.784/4.818/10.159/3.788 ms 2025-08-29 21:36:29.655436 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 21:36:30.058223 | orchestrator | ok: Runtime: 0:11:25.265434 2025-08-29 21:36:30.102752 | 2025-08-29 21:36:30.102917 | TASK [Run tempest] 2025-08-29 21:36:30.637432 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:30.654383 | 2025-08-29 21:36:30.654593 | TASK [Check prometheus alert status] 2025-08-29 21:36:31.189119 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:31.192232 | 2025-08-29 21:36:31.192471 | PLAY RECAP 2025-08-29 21:36:31.192627 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-08-29 21:36:31.192691 | 2025-08-29 21:36:31.432524 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-08-29 21:36:31.436941 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 21:36:32.195817 | 2025-08-29 21:36:32.195984 | PLAY [Post output play] 2025-08-29 21:36:32.212279 | 2025-08-29 21:36:32.212454 | LOOP [stage-output : Register sources] 2025-08-29 21:36:32.283914 | 2025-08-29 21:36:32.284245 | TASK [stage-output : Check sudo] 2025-08-29 21:36:33.172485 | orchestrator | sudo: a password is required 2025-08-29 21:36:33.325871 | orchestrator | ok: Runtime: 0:00:00.009175 2025-08-29 21:36:33.337693 | 2025-08-29 21:36:33.337847 | LOOP [stage-output : Set source and destination for files and folders] 2025-08-29 21:36:33.377890 | 2025-08-29 21:36:33.378142 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-08-29 21:36:33.457790 | orchestrator | ok 2025-08-29 21:36:33.466616 | 2025-08-29 21:36:33.466751 | LOOP [stage-output : Ensure target folders exist] 2025-08-29 21:36:33.911631 | orchestrator | ok: "docs" 2025-08-29 21:36:33.912011 | 2025-08-29 21:36:34.153794 | orchestrator | ok: "artifacts" 2025-08-29 21:36:34.428957 | orchestrator | ok: "logs" 2025-08-29 21:36:34.447158 | 2025-08-29 21:36:34.447321 | LOOP [stage-output : Copy files and folders to staging folder] 2025-08-29 21:36:34.487929 | 2025-08-29 21:36:34.488259 | TASK [stage-output : Make all log files readable] 2025-08-29 21:36:34.749466 | orchestrator | ok 2025-08-29 21:36:34.757877 | 2025-08-29 21:36:34.758007 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-08-29 21:36:34.792822 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:34.808711 | 2025-08-29 21:36:34.808862 | TASK [stage-output : Discover log files for compression] 2025-08-29 21:36:34.833158 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:34.850036 | 2025-08-29 21:36:34.850196 | LOOP [stage-output : Archive everything from logs] 2025-08-29 21:36:34.897947 | 2025-08-29 21:36:34.898134 | PLAY [Post cleanup play] 2025-08-29 21:36:34.907425 | 2025-08-29 21:36:34.907552 | TASK [Set cloud fact 
(Zuul deployment)] 2025-08-29 21:36:34.967494 | orchestrator | ok 2025-08-29 21:36:34.980375 | 2025-08-29 21:36:34.980535 | TASK [Set cloud fact (local deployment)] 2025-08-29 21:36:35.014958 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:35.031886 | 2025-08-29 21:36:35.032046 | TASK [Clean the cloud environment] 2025-08-29 21:36:38.635613 | orchestrator | 2025-08-29 21:36:38 - clean up servers 2025-08-29 21:36:39.440691 | orchestrator | 2025-08-29 21:36:39 - testbed-manager 2025-08-29 21:36:39.526747 | orchestrator | 2025-08-29 21:36:39 - testbed-node-5 2025-08-29 21:36:39.621445 | orchestrator | 2025-08-29 21:36:39 - testbed-node-1 2025-08-29 21:36:39.712047 | orchestrator | 2025-08-29 21:36:39 - testbed-node-0 2025-08-29 21:36:39.795190 | orchestrator | 2025-08-29 21:36:39 - testbed-node-2 2025-08-29 21:36:39.893436 | orchestrator | 2025-08-29 21:36:39 - testbed-node-4 2025-08-29 21:36:39.991190 | orchestrator | 2025-08-29 21:36:39 - testbed-node-3 2025-08-29 21:36:40.083753 | orchestrator | 2025-08-29 21:36:40 - clean up keypairs 2025-08-29 21:36:40.101867 | orchestrator | 2025-08-29 21:36:40 - testbed 2025-08-29 21:36:40.125818 | orchestrator | 2025-08-29 21:36:40 - wait for servers to be gone 2025-08-29 21:36:50.976580 | orchestrator | 2025-08-29 21:36:50 - clean up ports 2025-08-29 21:36:51.155466 | orchestrator | 2025-08-29 21:36:51 - 59341a1e-17fa-4201-b2df-effd43c6eba0 2025-08-29 21:36:51.631697 | orchestrator | 2025-08-29 21:36:51 - 5e5f38cd-c5ed-4972-a5a2-cb025d231b82 2025-08-29 21:36:51.880344 | orchestrator | 2025-08-29 21:36:51 - 79c56ed9-4f4a-4648-a8ae-b0cb5fa6eb4d 2025-08-29 21:36:52.090413 | orchestrator | 2025-08-29 21:36:52 - 84fc9583-0723-4ef4-9e0e-628dd699d4a8 2025-08-29 21:36:52.291173 | orchestrator | 2025-08-29 21:36:52 - e582518e-db95-49c4-930a-e4bb0f837e8d 2025-08-29 21:36:52.491411 | orchestrator | 2025-08-29 21:36:52 - f491ed68-62f2-4da8-bc21-fe7660d0e4a5 2025-08-29 21:36:52.690898 | orchestrator | 2025-08-29 21:36:52 - fb5f6d9c-fc4f-4ad3-8b26-bb568b4aad94 2025-08-29 21:36:52.956399 | orchestrator | 2025-08-29 21:36:52 - clean up volumes 2025-08-29 21:36:53.068658 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-manager-base 2025-08-29 21:36:53.107440 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-5-node-base 2025-08-29 21:36:53.148260 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-4-node-base 2025-08-29 21:36:53.191811 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-2-node-base 2025-08-29 21:36:53.231683 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-3-node-base 2025-08-29 21:36:53.273332 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-1-node-base 2025-08-29 21:36:53.316391 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-0-node-base 2025-08-29 21:36:53.357066 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-5-node-5 2025-08-29 21:36:53.396895 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-3-node-3 2025-08-29 21:36:53.439486 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-8-node-5 2025-08-29 21:36:53.477966 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-6-node-3 2025-08-29 21:36:53.521728 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-0-node-3 2025-08-29 21:36:53.563497 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-4-node-4 2025-08-29 21:36:53.603891 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-2-node-5 2025-08-29 21:36:53.647469 | orchestrator | 2025-08-29 21:36:53 - testbed-volume-7-node-4 2025-08-29 21:36:53.687549 | orchestrator | 
2025-08-29 21:36:53 - testbed-volume-1-node-4 2025-08-29 21:36:53.726619 | orchestrator | 2025-08-29 21:36:53 - disconnect routers 2025-08-29 21:36:54.359750 | orchestrator | 2025-08-29 21:36:54 - testbed 2025-08-29 21:36:55.366348 | orchestrator | 2025-08-29 21:36:55 - clean up subnets 2025-08-29 21:36:55.418314 | orchestrator | 2025-08-29 21:36:55 - subnet-testbed-management 2025-08-29 21:36:55.570211 | orchestrator | 2025-08-29 21:36:55 - clean up networks 2025-08-29 21:36:56.292176 | orchestrator | 2025-08-29 21:36:56 - net-testbed-management 2025-08-29 21:36:56.549241 | orchestrator | 2025-08-29 21:36:56 - clean up security groups 2025-08-29 21:36:56.588114 | orchestrator | 2025-08-29 21:36:56 - testbed-management 2025-08-29 21:36:56.700942 | orchestrator | 2025-08-29 21:36:56 - testbed-node 2025-08-29 21:36:56.808672 | orchestrator | 2025-08-29 21:36:56 - clean up floating ips 2025-08-29 21:36:57.262756 | orchestrator | 2025-08-29 21:36:57 - 81.163.192.51 2025-08-29 21:36:57.598962 | orchestrator | 2025-08-29 21:36:57 - clean up routers 2025-08-29 21:36:57.727029 | orchestrator | 2025-08-29 21:36:57 - testbed 2025-08-29 21:36:58.615223 | orchestrator | ok: Runtime: 0:00:23.235412 2025-08-29 21:36:58.617761 | 2025-08-29 21:36:58.617873 | PLAY RECAP 2025-08-29 21:36:58.617983 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-08-29 21:36:58.618041 | 2025-08-29 21:36:58.747001 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 21:36:58.749678 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 21:36:59.500203 | 2025-08-29 21:36:59.500384 | PLAY [Cleanup play] 2025-08-29 21:36:59.517128 | 2025-08-29 21:36:59.517291 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 21:36:59.565456 | orchestrator | ok 2025-08-29 21:36:59.572524 | 2025-08-29 21:36:59.572668 | TASK [Set cloud fact (local deployment)] 2025-08-29 21:36:59.607203 | orchestrator | skipping: Conditional result was False 2025-08-29 21:36:59.618983 | 2025-08-29 21:36:59.619121 | TASK [Clean the cloud environment] 2025-08-29 21:37:00.752200 | orchestrator | 2025-08-29 21:37:00 - clean up servers 2025-08-29 21:37:01.232585 | orchestrator | 2025-08-29 21:37:01 - clean up keypairs 2025-08-29 21:37:01.248851 | orchestrator | 2025-08-29 21:37:01 - wait for servers to be gone 2025-08-29 21:37:01.292539 | orchestrator | 2025-08-29 21:37:01 - clean up ports 2025-08-29 21:37:01.371374 | orchestrator | 2025-08-29 21:37:01 - clean up volumes 2025-08-29 21:37:01.441563 | orchestrator | 2025-08-29 21:37:01 - disconnect routers 2025-08-29 21:37:01.472909 | orchestrator | 2025-08-29 21:37:01 - clean up subnets 2025-08-29 21:37:01.493508 | orchestrator | 2025-08-29 21:37:01 - clean up networks 2025-08-29 21:37:01.643639 | orchestrator | 2025-08-29 21:37:01 - clean up security groups 2025-08-29 21:37:01.675496 | orchestrator | 2025-08-29 21:37:01 - clean up floating ips 2025-08-29 21:37:01.699090 | orchestrator | 2025-08-29 21:37:01 - clean up routers 2025-08-29 21:37:02.156350 | orchestrator | ok: Runtime: 0:00:01.323720 2025-08-29 21:37:02.158349 | 2025-08-29 21:37:02.158480 | PLAY RECAP 2025-08-29 21:37:02.158539 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-08-29 21:37:02.158563 | 2025-08-29 21:37:02.282151 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 21:37:02.283308 | POST-RUN START: 
[trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 21:37:03.013814 | 2025-08-29 21:37:03.014015 | PLAY [Base post-fetch] 2025-08-29 21:37:03.031132 | 2025-08-29 21:37:03.031300 | TASK [fetch-output : Set log path for multiple nodes] 2025-08-29 21:37:03.097194 | orchestrator | skipping: Conditional result was False 2025-08-29 21:37:03.113368 | 2025-08-29 21:37:03.113612 | TASK [fetch-output : Set log path for single node] 2025-08-29 21:37:03.162327 | orchestrator | ok 2025-08-29 21:37:03.172633 | 2025-08-29 21:37:03.172793 | LOOP [fetch-output : Ensure local output dirs] 2025-08-29 21:37:03.661659 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/logs" 2025-08-29 21:37:03.933525 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/artifacts" 2025-08-29 21:37:04.199614 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fe7640c6ad7b40cc86499111616a1a68/work/docs" 2025-08-29 21:37:04.222201 | 2025-08-29 21:37:04.222374 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-08-29 21:37:05.151280 | orchestrator | changed: .d..t...... ./ 2025-08-29 21:37:05.151793 | orchestrator | changed: All items complete 2025-08-29 21:37:05.151870 | 2025-08-29 21:37:05.880382 | orchestrator | changed: .d..t...... ./ 2025-08-29 21:37:06.610260 | orchestrator | changed: .d..t...... ./ 2025-08-29 21:37:06.638590 | 2025-08-29 21:37:06.638727 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-08-29 21:37:06.675006 | orchestrator | skipping: Conditional result was False 2025-08-29 21:37:06.677976 | orchestrator | skipping: Conditional result was False 2025-08-29 21:37:06.702804 | 2025-08-29 21:37:06.702960 | PLAY RECAP 2025-08-29 21:37:06.703047 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-08-29 21:37:06.703092 | 2025-08-29 21:37:06.828989 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 21:37:06.830100 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 21:37:07.574157 | 2025-08-29 21:37:07.574335 | PLAY [Base post] 2025-08-29 21:37:07.589811 | 2025-08-29 21:37:07.589951 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-08-29 21:37:08.532953 | orchestrator | changed 2025-08-29 21:37:08.541800 | 2025-08-29 21:37:08.541919 | PLAY RECAP 2025-08-29 21:37:08.541992 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-08-29 21:37:08.542059 | 2025-08-29 21:37:08.687917 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 21:37:08.689008 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-08-29 21:37:09.499344 | 2025-08-29 21:37:09.499552 | PLAY [Base post-logs] 2025-08-29 21:37:09.511272 | 2025-08-29 21:37:09.511467 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-08-29 21:37:09.958590 | localhost | changed 2025-08-29 21:37:09.975047 | 2025-08-29 21:37:09.975242 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-08-29 21:37:10.016302 | localhost | ok 2025-08-29 21:37:10.023122 | 2025-08-29 21:37:10.023330 | TASK [Set zuul-log-path fact] 2025-08-29 21:37:10.056254 | localhost | ok 2025-08-29 21:37:10.071362 | 2025-08-29 21:37:10.071570 | TASK [set-zuul-log-path-fact : Set log path for a 
build] 2025-08-29 21:37:10.098788 | localhost | ok 2025-08-29 21:37:10.105738 | 2025-08-29 21:37:10.105907 | TASK [upload-logs : Create log directories] 2025-08-29 21:37:10.608072 | localhost | changed 2025-08-29 21:37:10.613942 | 2025-08-29 21:37:10.614118 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 21:37:11.128196 | localhost -> localhost | ok: Runtime: 0:00:00.005688 2025-08-29 21:37:11.137977 | 2025-08-29 21:37:11.138173 | TASK [upload-logs : Upload logs to log server] 2025-08-29 21:37:11.717955 | localhost | Output suppressed because no_log was given 2025-08-29 21:37:11.722030 | 2025-08-29 21:37:11.722207 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 21:37:11.779539 | localhost | skipping: Conditional result was False 2025-08-29 21:37:11.784908 | localhost | skipping: Conditional result was False 2025-08-29 21:37:11.796193 | 2025-08-29 21:37:11.796504 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 21:37:11.845247 | localhost | skipping: Conditional result was False 2025-08-29 21:37:11.845950 | 2025-08-29 21:37:11.849212 | localhost | skipping: Conditional result was False 2025-08-29 21:37:11.858310 | 2025-08-29 21:37:11.858635 | LOOP [upload-logs : Upload console log and json output]
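
The "Clean the cloud environment" task above tears the testbed down in a fixed order: servers, keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself. A rough equivalent with the plain openstack CLI is sketched below; the resource names are the ones visible in the log, and this is not the actual cleanup code the job runs:

```bash
#!/usr/bin/env bash
# Rough sketch of the teardown order shown in the log; not the job's own tooling.
set -eu

# Servers first ("--wait" stands in for the explicit "wait for servers to be gone" step).
openstack server list -f value -c Name | grep '^testbed-' | xargs -r -n1 openstack server delete --wait
openstack keypair delete testbed

# Leftover ports and volumes.
openstack port list --network net-testbed-management -f value -c ID | xargs -r -n1 openstack port delete
openstack volume list -f value -c Name | grep '^testbed-volume-' | xargs -r -n1 openstack volume delete

# Detach and remove the network plumbing.
openstack router remove subnet testbed subnet-testbed-management
openstack subnet delete subnet-testbed-management
openstack network delete net-testbed-management
openstack security group delete testbed-management testbed-node

# Release floating IPs, then drop the router.
openstack floating ip list -f value -c 'Floating IP Address' | xargs -r -n1 openstack floating ip delete
openstack router delete testbed
```
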